
Showing papers in "Operations Research in 2006"


Journal ArticleDOI
TL;DR: It is shown that the structure of the optimal robust policy is of the same base-stock character as the optimal stochastic policy for a wide range of inventory problems in single installations, series systems, and general supply chains.
Abstract: We propose a general methodology based on robust optimization to address the problem of optimally controlling a supply chain subject to stochastic demand in discrete time. This problem has been studied in the past using dynamic programming, which suffers from dimensionality problems and assumes full knowledge of the demand distribution. The proposed approach takes into account the uncertainty of the demand in the supply chain without assuming a specific distribution, while remaining highly tractable and providing insight into the corresponding optimal policy. It also allows adjustment of the level of robustness of the solution to trade off performance and protection against uncertainty. An attractive feature of the proposed approach is its numerical tractability, especially when compared to multidimensional dynamic programming problems in complex supply chains, as the robust problem is of the same difficulty as the nominal problem, that is, a linear programming problem when there are no fixed costs, and a mixed-integer programming problem when fixed costs are present. Furthermore, we show that the optimal policy obtained in the robust approach is identical to the optimal policy obtained in the nominal case for a modified and explicitly computable demand sequence. In this way, we show that the structure of the optimal robust policy is of the same base-stock character as the optimal stochastic policy for a wide range of inventory problems in single installations, series systems, and general supply chains. Preliminary computational results are very promising.

619 citations
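The base-stock structure highlighted above lends itself to a compact illustration. The sketch below simulates an order-up-to-S policy for a single installation and picks the best level by enumeration; the demand sequence, cost rates, and candidate levels are hypothetical, not data from the paper.

```python
# Minimal sketch of a base-stock (order-up-to-S) inventory policy: each
# period, order up to the target level S, then subtract demand.  Holding
# cost h is charged on positive stock, backlog cost b on shortages.
# Demands, costs, and candidate levels are illustrative only.

def simulate_base_stock(S, demands, h=1.0, b=4.0):
    """Return total holding/backlog cost under an order-up-to-S policy."""
    inventory = 0.0
    total_cost = 0.0
    for d in demands:
        order = max(S - inventory, 0.0)   # order up to the base-stock level
        inventory += order - d            # receive order, satisfy demand
        total_cost += h * max(inventory, 0.0) + b * max(-inventory, 0.0)
    return total_cost

demands = [3, 5, 2, 7, 4]
costs = {S: simulate_base_stock(S, demands) for S in range(0, 11)}
best_S = min(costs, key=costs.get)
```

The paper's contribution is that policies of this same order-up-to form remain optimal in the robust setting, without assuming a demand distribution.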


Journal ArticleDOI
TL;DR: A mixed-integer programming formulation of the dial-a-ride problem and a branch-and-cut algorithm are introduced; the algorithm uses new valid inequalities as well as known valid inequalities for the traveling salesman, vehicle routing, and pick-up and delivery problems.
Abstract: In the dial-a-ride problem, users formulate requests for transportation from a specific origin to a specific destination. Transportation is carried out by vehicles providing a shared service. The problem consists of designing a set of minimum-cost vehicle routes satisfying capacity, duration, time window, pairing, precedence, and ride-time constraints. This paper introduces a mixed-integer programming formulation of the problem and a branch-and-cut algorithm. The algorithm uses new valid inequalities for the dial-a-ride problem as well as known valid inequalities for the traveling salesman, the vehicle routing, and the pick-up and delivery problems. Computational experiments performed on randomly generated instances show that the proposed approach can be used to solve small to medium-size instances.

585 citations


Journal ArticleDOI
TL;DR: This paper suggests a method for the exact simulation of the stock price and variance under Hestons stochastic volatility model and other affine jump diffusion processes and achieves an O(s-1/2) convergence rate, where s is the total computational budget.
Abstract: The stochastic differential equations for affine jump diffusion models do not yield exact solutions that can be directly simulated. Discretization methods can be used for simulating security prices under these models. However, discretization introduces bias into the simulation results, and a large number of time steps may be needed to reduce the discretization bias to an acceptable level. This paper suggests a method for the exact simulation of the stock price and variance under Hestons stochastic volatility model and other affine jump diffusion processes. The sample stock price and variance from the exact distribution can then be used to generate an unbiased estimator of the price of a derivative security. We compare our method with the more conventional Euler discretization method and demonstrate the faster convergence rate of the error in our method. Specifically, our method achieves an O(s-1/2) convergence rate, where s is the total computational budget. The convergence rate for the Euler discretization method is O(s-1/3) or slower, depending on the model coefficients and option payoff function.

543 citations
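The Euler baseline that the paper's exact method is compared against can be sketched in a few lines. The version below uses a full-truncation fix to keep the simulated variance nonnegative; all parameter values are illustrative, not taken from the paper.

```python
import numpy as np

# Euler discretization of the Heston model (the biased baseline the exact
# method is compared against).  Dynamics:
#   dS = r*S dt + sqrt(v)*S dW1
#   dv = kappa*(theta - v) dt + sigma*sqrt(v) dW2,  corr(dW1, dW2) = rho
# Full truncation: v is floored at 0 wherever sqrt(v) or the drift uses it.

def heston_euler(S0, v0, r, kappa, theta, sigma, rho, T, n_steps, n_paths, rng):
    dt = T / n_steps
    S = np.full(n_paths, S0, dtype=float)
    v = np.full(n_paths, v0, dtype=float)
    for _ in range(n_steps):
        z1 = rng.standard_normal(n_paths)
        z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n_paths)
        v_pos = np.maximum(v, 0.0)                       # full truncation
        S *= np.exp((r - 0.5 * v_pos) * dt + np.sqrt(v_pos * dt) * z1)
        v += kappa * (theta - v_pos) * dt + sigma * np.sqrt(v_pos * dt) * z2
    return S

rng = np.random.default_rng(0)
S_T = heston_euler(100.0, 0.04, 0.05, 2.0, 0.04, 0.3, -0.7, 1.0, 200, 10000, rng)
call_price = np.exp(-0.05) * np.mean(np.maximum(S_T - 100.0, 0.0))
```

The discretization bias the abstract refers to enters through the finite step size dt; the paper's exact-simulation approach avoids it entirely by sampling the terminal price and variance from their exact joint distribution.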


Journal ArticleDOI
TL;DR: The development of CALIBRA is described, a procedure that attempts to find the best values for up to five search parameters associated with a procedure under study and is able to find parameter values that either match or improve the performance of the procedures resulting from using the parameter values suggested by their developers.
Abstract: Researchers and practitioners frequently spend more time fine-tuning algorithms than designing and implementing them. This is particularly true when developing heuristics and metaheuristics, where the right choice of values for search parameters has a considerable effect on the performance of the procedure. When testing metaheuristics, performance typically is measured considering both the quality of the solutions obtained and the time needed to find them. In this paper, we describe the development of CALIBRA, a procedure that attempts to find the best values for up to five search parameters associated with a procedure under study. Because CALIBRA uses Taguchi's fractional factorial experimental designs coupled with a local search procedure, the best values found are not guaranteed to be optimal. We test CALIBRA on six existing heuristic-based procedures. These experiments show that CALIBRA is able to find parameter values that either match or improve the performance of the procedures resulting from using the parameter values suggested by their developers. The latest version of CALIBRA can be downloaded for free from the website that appears in the online supplement of this paper at http://or.pubs.informs.org/Pages.collect.html.

407 citations


Journal ArticleDOI
TL;DR: Computational results on two specific classes of hard-to-solve MIPs indicate that the new method produces a reformulation which can be solved some orders of magnitude faster than the original MIP model.
Abstract: Mixed-integer programs (MIPs) involving logical implications modeled through big-M coefficients are notoriously among the hardest to solve. In this paper, we propose and analyze computationally an automatic problem reformulation of quite general applicability, aimed at removing the model dependency on the big-M coefficients. Our solution scheme defines a master integer linear problem (ILP) with no continuous variables, which contains combinatorial information on the feasible integer variable combinations that can be “distilled” from the original MIP model. The master solutions are sent to a slave linear program (LP), which validates them and possibly returns combinatorial inequalities to be added to the current master ILP. The inequalities are associated to minimal (or irreducible) infeasible subsystems of a certain linear system, and can be separated efficiently in case the master solution is integer. The overall solution mechanism closely resembles the Benders' one, but the cuts we produce are purely co...

364 citations


Journal ArticleDOI
TL;DR: This work provides alternative characterizations of the IGFR property that simplify verifying whether the IGFR condition holds, and relates the limit of the generalized failure rate to the moments of a distribution.
Abstract: Distributions with an increasing generalized failure rate (IGFR) have useful applications in pricing and supply chain contracting problems. We provide alternative characterizations of the IGFR property that simplify the task of verifying whether the IGFR condition holds. We also relate the limit of the generalized failure rate to the moments of a distribution.

325 citations
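For a distribution with density f and cdf F, the generalized failure rate is g(x) = x f(x) / (1 - F(x)), and IGFR means g is (weakly) increasing on the support. The snippet below checks this numerically for the exponential distribution, whose generalized failure rate simplifies to lam * x; the rate parameter is an arbitrary illustration.

```python
import math

# Generalized failure rate g(x) = x * f(x) / (1 - F(x)).  A distribution is
# IGFR if g is (weakly) increasing.  For the exponential distribution with
# rate lam: f(x) = lam*exp(-lam*x) and 1 - F(x) = exp(-lam*x), so
# g(x) = lam * x, which is increasing -- exponential demand is IGFR.

def gfr_exponential(x, lam=2.0):
    pdf = lam * math.exp(-lam * x)
    survival = math.exp(-lam * x)
    return x * pdf / survival           # algebraically equal to lam * x

xs = [0.1 * k for k in range(1, 50)]
vals = [gfr_exponential(x) for x in xs]
is_increasing = all(a <= b for a, b in zip(vals, vals[1:]))
```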


Journal ArticleDOI
TL;DR: This work formulates the problem of managing patient demand for diagnostic service as a finite-horizon dynamic program, identifies properties of the optimal policies, and studies their sensitivity to the various cost and probability parameters.
Abstract: Hospital diagnostic facilities, such as magnetic resonance imaging centers, typically provide service to several diverse patient groups: outpatients, who are scheduled in advance; inpatients, whose demands are generated randomly during the day; and emergency patients, who must be served as soon as possible. Our analysis focuses on two interrelated tasks: designing the outpatient appointment schedule, and establishing dynamic priority rules for admitting patients into service. We formulate the problem of managing patient demand for diagnostic service as a finite-horizon dynamic program and identify properties of the optimal policies. Using empirical data from a major urban hospital, we conduct numerical studies to develop insights into the sensitivity of the optimal policies to the various cost and probability parameters and to evaluate the performance of several heuristic rules for appointment acceptance and patient scheduling.

280 citations


Journal ArticleDOI
TL;DR: In this article, an optimization-via-simulation algorithm called COMPASS was proposed for use when the performance measure is estimated via a stochastic, discrete-event simulation and the decision variables are integer ordered.
Abstract: We propose an optimization-via-simulation algorithm, called COMPASS, for use when the performance measure is estimated via a stochastic, discrete-event simulation, and the decision variables are integer ordered. We prove that COMPASS converges to the set of local optimal solutions with probability 1 for both terminating and steady-state simulation, and for both fully constrained problems and partially constrained or unconstrained problems under mild conditions.

261 citations


Journal ArticleDOI
Ward Whitt1
TL;DR: Deterministic fluid models are developed to provide simple first-order performance descriptions for multiserver queues with abandonment under heavy loads; the fluid model accurately shows that steady-state performance depends strongly upon the time-to-abandon distribution beyond its mean, but not upon the service-time distribution beyond its mean.
Abstract: Deterministic fluid models are developed to provide simple first-order performance descriptions for multiserver queues with abandonment under heavy loads. Motivated by telephone call centers, the focus is on multiserver queues with a large number of servers and nonexponential service-time and time-to-abandon distributions. The first fluid model serves as an approximation for the G/GI/s+GI queueing model, which has a general stationary arrival process with arrival rate λ, independent and identically distributed (IID) service times with a general distribution, s servers, and IID abandon times with a general distribution. The fluid model is useful in the overloaded regime, where λ > s, which is often realistic because only a small amount of abandonment can keep the system stable. Numerical experiments, using simulation for M/GI/s+GI models and exact numerical algorithms for M/M/s+M models, show that the fluid model provides useful approximations for steady-state performance measures when the system is heavily loaded. The fluid model accurately shows that steady-state performance depends strongly upon the time-to-abandon distribution beyond its mean, but not upon the service-time distribution beyond its mean. The second fluid model is a discrete-time fluid model, which serves as an approximation for the G_t(n)/GI/s+GI queueing model, having a state-dependent and time-dependent arrival process. The discrete-time framework is exploited to prove that properly scaled queueing processes in the queueing model converge to fluid functions as s → ∞. The discrete-time framework is also convenient for calculating the time-dependent fluid performance descriptions.

236 citations


Journal ArticleDOI
TL;DR: It is shown that the optimal replenishment policy is of a threshold type, i.e., it is optimal to produce if and only if the starting inventory in a period is below a threshold value, and that both the optimal production quantity and the optimal price in each period are decreasing in the starting inventory.
Abstract: We study the joint inventory replenishment and pricing problem for production systems with random demand and yield. More specifically, we analyze the following single-item, periodic-review model. Demands in consecutive periods are independent random variables and their distributions are price sensitive. The production yield is uncertain so that the quantity received from a replenishment is a random variable whose distribution depends on the production quantity. Stockouts are fully backlogged. Our problem is to characterize the optimal dynamic policy that simultaneously determines the production quantity and the price for each period to maximize the total discounted profit. We show that the optimal replenishment policy is of a threshold type, i.e., it is optimal to produce if and only if the starting inventory in a period is below a threshold value, and that both the optimal production quantity and the optimal price in each period are decreasing in the starting inventory. We further study the operational e...

206 citations


Journal ArticleDOI
TL;DR: This work considers the supply chain of a manufacturer who produces time-sensitive products that have a large variety, a short life cycle, and are sold in a very short selling season and proposes several fast heuristics for the intractable problems.
Abstract: We consider the supply chain of a manufacturer who produces time-sensitive products that have a large variety, a short life cycle, and are sold in a very short selling season. The supply chain consists of multiple overseas plants and a domestic distribution center (DC). Retail orders are first processed at the plants and then shipped from the plants to the DC for distribution to domestic retailers. Due to variations in productivity and labor costs at different plants, the processing time and cost of an order are dependent on the plant to which it is assigned. We study the following static and deterministic order assignment and scheduling problem faced by the manufacturer before every selling season: Given a set of orders, determine which orders are to be assigned to each plant, find a schedule for processing the assigned orders at each plant, and find a schedule for shipping the completed orders from each plant to the DC, such that a certain performance measure is optimized. We consider four different performance measures, all of which take into account both delivery lead time and the total production and distribution cost. A problem corresponding to each performance measure is studied separately. We analyze the computational complexity of various cases of the problems by either proving that a problem is intractable or providing an efficient exact algorithm for the problem. We propose several fast heuristics for the intractable problems. We analyze the worst-case and asymptotic performance of the heuristics and also computationally evaluate their performance using randomly generated test instances. Our results show that the heuristics are capable of generating near-optimal solutions quickly.

Journal ArticleDOI
TL;DR: A mathematical framework is developed to analyze the process by which airlines forecast demand and optimize booking controls over a sequence of flights and gives conditions under which spiral down occurs.
Abstract: The spiral-down effect occurs when incorrect assumptions about customer behavior cause high-fare ticket sales, protection levels, and revenues to systematically decrease over time. If an airline decides how many seats to protect for sale at a high fare based on past high-fare sales, while neglecting to account for the fact that availability of low-fare tickets will reduce high-fare sales, then high-fare sales will decrease, resulting in lower future estimates of high-fare demand. This subsequently yields lower protection levels for high-fare tickets, greater availability of low-fare tickets, and even lower high-fare ticket sales. The pattern continues, resulting in a so-called spiral down. We develop a mathematical framework to analyze the process by which airlines forecast demand and optimize booking controls over a sequence of flights. Within the framework, we give conditions under which spiral down occurs.
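The feedback loop described above can be made concrete with a stylized two-fare simulation: true high-fare demand is constant, but while low-fare seats remain open a fraction of high-fare customers buys down, and the forecaster naively protects only as many seats as were just sold. All numbers are illustrative, not the paper's model.

```python
# Stylized spiral-down simulation (illustrative numbers, not from the paper).
# True high-fare demand is D per flight.  The airline protects P_t seats for
# the high fare, but while low-fare tickets are open a fraction beta of
# high-fare customers buys down.  The next protection level is naively set
# to the high-fare sales just observed, ignoring the buy-down.

D, beta = 40.0, 0.2
P = [40.0]                                # initial protection level
for _ in range(10):
    sales = min(D, P[-1]) * (1 - beta)    # censored, buy-down-diluted sales
    P.append(sales)                       # naive forecast: protect what sold
```

Each iteration shrinks the protection level by the buy-down factor, so protection (and with it high-fare revenue) decays geometrically toward zero, which is the spiral-down pattern the paper's framework characterizes.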

Journal ArticleDOI
TL;DR: A new model for IMRT treatment planning is proposed that has the potential to achieve most of the goals considered to date with respect to the quality of a treatment plan while, in contrast with established mixed-integer and nonlinear programming formulations, retaining linearity of the optimization problem.
Abstract: We consider the problem of radiation therapy treatment planning for cancer patients. During radiation therapy, beams of radiation pass through a patient, killing both cancerous and normal cells. Thus, the radiation therapy must be carefully planned so that a clinically prescribed dose is delivered to targets containing cancerous cells, while nearby organs and tissues are spared. Currently, a technique called intensity-modulated radiation therapy (IMRT) is considered to be the most effective radiation therapy for many forms of cancer. In IMRT, the patient is irradiated from several beams, each of which is decomposed into hundreds of small beamlets, the intensities of which can be controlled individually. In this paper, we consider the problem of designing a treatment plan for IMRT when the orientations of the beams are given. We propose a new model that has the potential to achieve most of the goals with respect to the quality of a treatment plan that have been considered to date. However, in contrast with established mixed-integer and nonlinear programming formulations, we do so while retaining linearity of the optimization problem, which substantially improves the tractability of the optimization problem. Furthermore, we discuss how several additional quality and practical aspects of the problem that have been ignored to date can be incorporated into our linear model. We demonstrate the effectiveness of our approach on clinical data.

Journal ArticleDOI
TL;DR: Using a multiplicative demand model, this paper fully characterize individual firms' decisions in equilibria, under each of the two game settings, and derive closed-form performance measures, both for the channel and for individual channel members.
Abstract: Consider n manufacturers, each producing a different product and selling it to a market, either directly or through a common retailer. The n products are perfectly complementary in the sense that they are always sold and consumed jointly or in sets of one unit of each. Demand for the products during a selling season is both price sensitive and uncertain. Each of the n manufacturers faces the problem of choosing a production quantity and a selling price for his product. Two settings are considered, regarding the decision sequence of the n manufacturers: They are either simultaneous or sequential. The retailer, when present, employs a consignment-sales contract with revenue sharing to bind her relationship with the manufacturers and to extract profit for herself. Using a multiplicative demand model in this paper, we fully characterize individual firms' decisions in equilibria, under each of the two game settings, and derive closed-form performance measures, both for the channel and for individual channel members. These closed-form solutions allow us to explore the effects of channel structure and parameters on firms' decisions and performance that lead to conclusions of managerial interest.

Journal ArticleDOI
TL;DR: A new heuristic algorithm is presented for the two-dimensional irregular stock-cutting problem, which generates significantly better results than the previous state of the art on a wide range of established benchmark problems.
Abstract: This paper presents a new heuristic algorithm for the two-dimensional irregular stock-cutting problem, which generates significantly better results than the previous state of the art on a wide range of established benchmark problems. The developed algorithm is able to pack shapes with a traditional line representation, and it can also pack shapes that incorporate circular arcs and holes. This in itself represents a significant improvement upon the state of the art. By utilising hill climbing and tabu local search methods, the proposed technique produces 25 new best solutions for 26 previously reported benchmark problems drawn from over 20 years of cutting and packing research. These solutions are obtained using reasonable time frames, the majority of problems being solved within five minutes. In addition to this, we also present 10 new benchmark problems, which involve both circular arcs and holes. These are provided because of a shortage of realistic industrial style benchmark problems within the literature and to encourage further research and greater comparison between this and future methods.

Journal ArticleDOI
TL;DR: This paper aims at computing the maximum EVDI over all f ∈ F for any order quantity, and an optimization procedure is provided to calculate the order quantity that minimizes themaximum EVDI.
Abstract: This paper extends previous work on the distribution-free newsvendor problem, where only partial information about the demand distribution is available. More specifically, the analysis assumes that the demand distribution f belongs to a class of probability distribution functions (pdf) F with mean μ and standard deviation σ. While previous work has examined the expected value of distribution information (EVDI) for a particular order quantity and a particular pdf f, this paper aims at computing the maximum EVDI over all f ∈ F for any order quantity. In addition, an optimization procedure is provided to calculate the order quantity that minimizes the maximum EVDI.
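For context on the distribution-free setting this paper extends: Scarf's classic max-min newsvendor quantity uses only the mean and standard deviation of demand. The snippet below computes it for hypothetical cost parameters; it is the starting point of this literature, not the EVDI procedure of the paper itself.

```python
import math

# Scarf's classic distribution-free newsvendor quantity: only the mean mu
# and standard deviation sigma of demand are known.  With unit cost c and
# selling price p, the max-min (worst-case-optimal) order quantity is
#   Q* = mu + (sigma/2) * ( sqrt((p-c)/c) - sqrt(c/(p-c)) ).
# Numbers below are illustrative.

def scarf_quantity(mu, sigma, c, p):
    ratio = (p - c) / c                  # underage-to-overage cost ratio
    return mu + 0.5 * sigma * (math.sqrt(ratio) - 1.0 / math.sqrt(ratio))

Q = scarf_quantity(mu=100.0, sigma=20.0, c=4.0, p=10.0)
```

When underage and overage costs are equal (ratio 1), the rule orders exactly the mean; the paper's analysis quantifies how much such distribution-free rules can lose relative to full distributional knowledge.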

Journal ArticleDOI
TL;DR: In this paper, a call center model with m customer classes and r agent pools is analyzed, and the authors prove an asymptotic lower bound on expected total cost, which uses a strikingly simple distillation of the original system data.
Abstract: This paper analyzes a call center model with m customer classes and r agent pools. The model is one with doubly stochastic arrivals, which means that the m-vector of instantaneous arrival rates is allowed to vary both temporally and stochastically. Two levels of call center management are considered: staffing the r pools of agents, and dynamically routing calls to agents. The system manager's objective is to minimize the sum of personnel costs and abandonment penalties. We consider a limiting parameter regime that is natural for call centers and relatively easy to analyze, but apparently novel in the literature of applied probability. For that parameter regime, we prove an asymptotic lower bound on expected total cost, which uses a strikingly simple distillation of the original system data. We then propose a method for staffing and routing based on linear programming (LP), and show that it achieves the asymptotic lower bound on expected total cost; in that sense the proposed method is asymptotically optimal.

Journal ArticleDOI
TL;DR: A nonparametric approach to multiproduct pricing is formulated that establishes that computing optimal prices with respect to a sample of consumer data is NP-complete in the strong sense and suggests enhancements that may lead to a more effective pricing strategy.
Abstract: Developed by General Motors (GM), the Auto Choice Advisor website (http://www.autochoiceadvisor.com) recommends vehicles to consumers based on their requirements and budget constraints. Through the website, GM has access to large quantities of data that reflect consumer preferences. Motivated by the availability of such data, we formulate a nonparametric approach to multiproduct pricing. We consider a class of models of consumer purchasing behavior, each of which relates observed data on a consumer's requirements and budget constraint to subsequent purchasing tendencies. To price products, we aim at optimizing prices with respect to a sample of consumer data. We offer a bound on the sample size required for the resulting prices to be near-optimal with respect to the true distribution of consumers. The bound exhibits a dependence of O(n log n) on the number n of products being priced, showing that, in terms of sample complexity, the approach is scalable to large numbers of products. With regard to computational complexity, we establish that computing optimal prices with respect to a sample of consumer data is NP-complete in the strong sense. However, when prices are constrained by a price ladder (an ordering of prices defined prior to price determination), the problem becomes one of maximizing a supermodular function with real-valued variables. It is not yet known whether this problem is NP-hard. We provide a heuristic for our price-ladder-constrained problem, together with encouraging computational results. Finally, we apply our approach to a data set from the Auto Choice Advisor website. Our analysis provides insights into the current pricing policy at GM and suggests enhancements that may lead to a more effective pricing strategy.

Journal ArticleDOI
TL;DR: This work presents fully sequential procedures for steady-state simulation that are designed to select the best of a finite number of simulated systems when best is defined by the largest or smallest long-run average performance.
Abstract: We present fully sequential procedures for steady-state simulation that are designed to select the best of a finite number of simulated systems when best is defined by the largest or smallest long-run average performance. We also provide a framework for establishing the asymptotic validity of such procedures and prove the validity of our procedures. An example based on the M/M/1 queue is given.

Journal ArticleDOI
TL;DR: Two linear factories and a re-entrant factory, each one modeled by a hyperbolic conservation law, are linked to provide proof of concept for efficient supply chain simulations.
Abstract: High-volume, multistage continuous production flow through a re-entrant factory is modeled through a conservation law for a continuous-density variable on a continuous-production line augmented by a state equation for the speed of the production along the production line. The resulting nonlinear, nonlocal hyperbolic conservation law allows fast and accurate simulations. Little's law is built into the model. It is argued that the state equation for a re-entrant factory should be nonlinear. Comparisons of simulations of the partial differential equation (PDE) model and discrete-event simulations are presented. A general analysis of the model shows that for any nonlinear state equation there exist two steady states of production below a critical start rate: A high-volume, high-throughput time state and a low-volume, low-throughput time state. The stability of the low-volume state is proved. Output is controlled by adjusting the start rate to a changed demand rate. Two linear factories and a re-entrant factory, each one modeled by a hyperbolic conservation law, are linked to provide proof of concept for efficient supply chain simulations. Instantaneous density and flux through the supply chain as well as work in progress (WIP) and output as a function of time are presented. Extensions to include multiple product flows and preference rules for products and dispatch rules for re-entrant choices are discussed.
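The simplest numerical instance of this PDE machinery is a first-order upwind discretization of the conservation law rho_t + (v * rho)_x = 0 with a speed that falls as work in progress grows. The state equation and all numbers below are hypothetical stand-ins for the paper's nonlinear, nonlocal model.

```python
import numpy as np

# First-order upwind scheme for a production-flow conservation law
#   rho_t + (v * rho)_x = 0  on  x in [0, 1],
# with a (hypothetical) nonlinear state equation: line speed falls as
# total work in progress W = integral of rho grows, v = v0 / (1 + W).

nx, dt, v0 = 100, 0.004, 1.0
dx = 1.0 / nx                          # CFL: v0 * dt / dx = 0.4 <= 1
rho = np.zeros(nx)                     # empty factory at t = 0
start_rate = 0.5                       # influx of lots at x = 0
for _ in range(2000):
    W = rho.sum() * dx                 # work in progress
    v = v0 / (1.0 + W)                 # state equation (illustrative)
    flux = v * rho                     # flux out of each cell
    inflow = np.concatenate(([start_rate], flux[:-1]))
    rho += dt / dx * (inflow - flux)   # upwind update (flow moves right)

throughput = v0 / (1.0 + rho.sum() * dx) * rho[-1]
```

In steady state the flux is constant along the line, so throughput matches the start rate (here v = 0.5, rho = 1, W = 1), illustrating the high-volume/low-volume balance the paper analyzes for general nonlinear state equations.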

Journal ArticleDOI
TL;DR: In this paper, a branch-and-bound algorithm for solving the multiple depot vehicle scheduling problem is proposed, which combines column generation, variable fixing, and cutting planes, and it is shown that the solutions of the linear relaxation of the problem contain many odd cycles.
Abstract: We consider the multiple depot vehicle scheduling problem (MDVSP) and propose a branch-and-bound algorithm for solving it that combines column generation, variable fixing, and cutting planes. We show that the solutions of the linear relaxation of the MDVSP contain many odd cycles. We derive a class of valid inequalities by extending the notion of odd cycle and describe a lifting procedure for these inequalities. We prove that the lifted inequalities represent, under certain conditions, facets of the underlying polytope. Finally, we present the results of a computational study comparing several strategies (variable fixing, cutting planes, mixed branching, and tree search) for solving the MDVSP.

Journal ArticleDOI
TL;DR: A multiechelon inventory system with stages arranged in series is shown to be solvable by decomposition into a sequence of single-stage systems, with each downstream stage following an echelon base-stock policy and the most upstream stage following a three-parameter policy with a simple structure.
Abstract: We analyze a multiechelon inventory system with inventory stages arranged in series. In addition to traditional forward material flows, used products are returned to a recovery facility, where they can be stored, disposed, or remanufactured and shipped to one of the stages to re-enter the forward flow of material. This system combines the key elements of two simpler systems: the series system studied by Clark and Scarf (1960) and the single-stage remanufacturing systems studied by Simpson (1978) and Inderfurth (1997). We focus on identifying the structure of the optimal remanufacturing/ordering/disposal policy for such a system. In particular, we investigate whether the optimal policy inherits the basic structural properties of the simpler systems. We show that if remanufactured items flow into the most upstream stage, then this is the case. Specifically, the system can be solved by decomposition into a sequence of single-stage systems, with each downstream stage following an echelon base-stock policy and the most upstream stage following a three-parameter policy with a simple (and intuitive) structure. We show that similar results hold when remanufactured products flow into a downstream stage; however, in this case some modifications must be made. In particular, the definition of echelon inventory must be adjusted for stages upstream of the remanufacturing stage, and disposal of used items can no longer be allowed. We also compare the information required for managing this system to that required in the Clark and Scarf or Inderfurth settings, and we point out how the requirements are somewhat different depending on whether remanufacturing occurs upstream or downstream.

Journal ArticleDOI
TL;DR: In this article, a mechanism design study for a monopolist selling multiple identical items to potential buyers arriving over time is presented, where participants in the model are time sensitive, with the same discount factor, potential buyers have unit demand and arrive sequentially according to a renewal process; and valuations are drawn independently from the same regular distribution.
Abstract: This paper is a mechanism design study for a monopolist selling multiple identical items to potential buyers arriving over time. Participants in our model are time sensitive, with the same discount factor; potential buyers have unit demand and arrive sequentially according to a renewal process; and valuations are drawn independently from the same regular distribution. Invoking the revelation principle, we restrict our attention to direct dynamic mechanisms taking a sequence of valuations and arrival epochs as input. We define two properties (discreteness and stability), and prove under further distributional assumptions that we may at no loss of generality consider only mechanisms satisfying them. This effectively reduces the mechanism input to a sequence of valuations and leads us to formulate the problem as a dynamic program (DP). As this DP is equivalent to a well-known infinite-horizon asset-selling problem, we finally characterize the optimal mechanism as a sequence of posted prices increasing with each sale. Remarkably, this result rationalizes somewhat the frequent restriction to dynamic pricing policies and the impatient-buyers assumption. Our numerical study indicates that, under various valuation distributions, the benefit of dynamic pricing over a fixed posted price may be small. Besides, posted prices are preferable to online auctions for a large number of items or a high interest rate, but in other cases auctions are close to optimal and significantly more robust.
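The asset-selling reduction mentioned above can be sketched for the single-item case: with per-arrival discount delta and i.i.d. valuations X, the continuation value V solves V = delta * E[max(X, V)], and the optimal posted price equals V. The uniform valuation distribution and discount factor below are illustrative choices, not the paper's.

```python
# Classic infinite-horizon asset-selling problem (one item left): with
# per-arrival discount delta and i.i.d. valuations X, the value V solves
#   V = delta * E[max(X, V)],
# and the optimal posted price is V (sell iff valuation exceeds the
# continuation value).  For X ~ Uniform(0,1): E[max(X, V)] = (1 + V**2) / 2.
# delta = 0.9 is an illustrative value.

def reservation_value(delta, tol=1e-12):
    V = 0.0
    while True:
        V_new = delta * (1.0 + V * V) / 2.0   # value iteration on the fixed point
        if abs(V_new - V) < tol:
            return V_new
        V = V_new

V = reservation_value(0.9)
```

The more patient the seller (delta closer to 1), the higher the reservation value, so posted prices rise as items become scarcer, matching the increasing-price structure of the optimal mechanism.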

Journal ArticleDOI
TL;DR: In this paper, a power portfolio optimization model that is intended as a decision aid for scheduling and hedging (DASH) in the wholesale power market is proposed, which integrates the unit commitment model with financial decision making by including the forwards and spot market activity within the scheduling decision model.
Abstract: We consider a power portfolio optimization model that is intended as a decision aid for scheduling and hedging (DASH) in the wholesale power market. Our multiscale model integrates the unit commitment model with financial decision making by including the forwards and spot market activity within the scheduling decision model. The methodology is based on a multiscale stochastic programming model that selects portfolio positions that perform well on a variety of scenarios generated through statistical modeling and optimization. When compared with several commonly used fixed-mix policies, our experiments demonstrate that the DASH model provides significant advantages.

Journal ArticleDOI
TL;DR: In this paper, a dynamic programming algorithm for solving the single-unit commitment (1UC) problem with ramping constraints and arbitrary convex cost functions is presented, which is based on a new approach for efficiently solving the one-unit economic dispatch (ED) problem, improving on previously known ones that were limited to piecewise linear functions.
Abstract: We present a dynamic programming algorithm for solving the single-unit commitment (1UC) problem with ramping constraints and arbitrary convex cost functions. The algorithm is based on a new approach for efficiently solving the single-unit economic dispatch (ED) problem with ramping constraints and arbitrary convex cost functions, improving on previously known ones that were limited to piecewise-linear functions. For simple convex functions, such as the quadratic ones typically used in applications, the solution cost of all the involved (ED) problems, consisting of finding an optimal …
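The overall shape of such a commitment DP can be sketched as follows (a minimal illustration with hypothetical numbers, not the paper's algorithm: ramping constraints and minimum up/down times are omitted, and unmet demand when the unit is off is priced at a penalty):

```python
# Illustrative sketch: dynamic programming over on/off commitment states
# with a quadratic dispatch cost and a startup cost. Ramping constraints
# and min up/down times, central to the paper, are omitted for brevity.

def dispatch_cost(demand_t, a=0.01, b=5.0, pmin=10.0, pmax=100.0):
    """Quadratic generation cost a*p^2 + b*p at the output level closest
    to demand_t within the unit's operating range [pmin, pmax]."""
    p = min(max(demand_t, pmin), pmax)
    return a * p * p + b * p

def unit_commitment(demand, startup=50.0, penalty=1_000.0):
    """Min total cost; when the unit is off, demand costs `penalty`/MW."""
    INF = float("inf")
    cost = [0.0, INF]  # state 0 = off, 1 = on; unit starts off
    for d in demand:
        off = min(cost[0], cost[1]) + penalty * d
        on = min(cost[0] + startup, cost[1]) + dispatch_cost(d)
        cost = [off, on]
    return min(cost)

print(unit_commitment([20.0, 60.0, 40.0]))
```

Adding ramping couples consecutive periods through the dispatch level, which is exactly why the embedded (ED) subproblems become the computational bottleneck the paper addresses.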

Journal ArticleDOI
TL;DR: This work formulates the dynamic cross-selling problem as a stochastic dynamic program blended with combinatorial optimization, analyzes how to select packaging complements and price product packages, and proposes packaging/pricing heuristics whose effectiveness is tested numerically.
Abstract: We consider the problem of dynamically cross-selling products (e.g., books) or services (e.g., travel reservations) in the e-commerce setting. In particular, we look at a company that faces a stream of stochastic customer arrivals and may offer each customer a choice between the requested product and a package containing the requested product as well as another product, which we call a “packaging complement.” Given consumer preferences and product inventories, we analyze two issues: (1) how to select packaging complements, and (2) how to price product packages to maximize profits. We formulate the cross-selling problem as a stochastic dynamic program blended with combinatorial optimization. We demonstrate the state-dependent and dynamic nature of the optimal package selection problem and derive the structural properties of the dynamic pricing problem. In particular, we focus on two practical business settings: with (the Emergency Replenishment Model) and without (the Lost-Sales Model) the possibility of inventory replenishment in the case of a product stockout. For the Emergency Replenishment Model, we establish that the problem is separable in the initial inventory of all products, and hence the dimensionality of the dynamic program can be significantly reduced. For both models, we suggest several packaging/pricing heuristics and test their effectiveness numerically.

Journal ArticleDOI
TL;DR: This work presents a linear integer programming framework incorporating spatial contiguity as an additional site selection criterion for conservation reserve design and generates a significantly more efficient reserve than a heuristic selection.
Abstract: Spatial considerations are important in conservation reserve design. A particularly important spatial requirement is the connectivity of selected sites. Direct connections between reserve sites increase the likelihood of species persistence by allowing dispersal and colonization of other areas within the network without species having to leave the reserve. The conventional set-covering and maximal-covering formulations of the reserve selection problem assume that species representation is the only criterion in site selection. This approach usually results in a small but highly fragmented reserve, which may not be desirable. We present a linear integer programming framework incorporating spatial contiguity as an additional site selection criterion. An empirical application to a data set on the occurrence of breeding birds in Berkshire, United Kingdom, demonstrates that site connectivity requires a significantly larger reserve. Incorporation of spatial criteria increases the computational complexity of the problem. To overcome this, we use a two-stage procedure where the original sites are aggregated first and an optimum solution is determined for the aggregate sites. Then, site selection is restricted to original sites included in the aggregate solution and a connected reserve is determined. In this particular application the above procedure generated a significantly more efficient reserve than a heuristic selection.
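The trade-off between coverage and contiguity described above can be illustrated with a toy sketch (hypothetical sites, species, and adjacency; an exhaustive search rather than the paper's integer programming and aggregation procedure): the unconstrained set-cover optimum can be fragmented, while requiring a connected reserve forces additional sites in.

```python
# Illustrative sketch (hypothetical data): minimum set cover for species
# representation by brute force, plus a BFS check that the chosen sites
# form a spatially connected (contiguous) reserve.

from itertools import combinations
from collections import deque

def is_connected(subset, adj):
    seen, queue = {subset[0]}, deque([subset[0]])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in subset and v not in seen:
                seen.add(v)
                queue.append(v)
    return len(seen) == len(subset)

def min_cover(species_by_site, n_species, connected=None):
    """Smallest site subset representing all species; if `connected`
    (an adjacency dict) is given, only contiguous subsets qualify."""
    sites = list(species_by_site)
    for k in range(1, len(sites) + 1):
        for subset in combinations(sites, k):
            covered = set().union(*(species_by_site[s] for s in subset))
            if len(covered) == n_species and (
                connected is None or is_connected(subset, connected)
            ):
                return subset
    return None

sites = {"A": {0, 1}, "B": {2, 3}, "C": {1, 2}, "D": {0}}
adj = {"A": ["C", "D"], "B": ["C"], "C": ["A", "B"], "D": ["A"]}
print(min_cover(sites, 4))                 # fragmented: two sites suffice
print(min_cover(sites, 4, connected=adj))  # contiguity needs a third site
```

The exponential search here is exactly why the paper resorts to an integer programming formulation with a two-stage aggregation procedure on realistic data sets.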

Journal ArticleDOI
TL;DR: A branch-and-cut algorithm for solving linear programs (LPs) with continuous separable piecewise-linear cost functions (PLFs) and gives two families of valid inequalities, which demonstrate the effectiveness of the cuts.
Abstract: We give a branch-and-cut algorithm for solving linear programs (LPs) with continuous separable piecewise-linear cost functions (PLFs). Models for PLFs use continuous variables in special-ordered sets of type 2 (SOS2). Traditionally, SOS2 constraints are enforced by introducing auxiliary binary variables and other linear constraints on them. Alternatively, we can enforce SOS2 constraints by branching on them, thus dispensing with auxiliary binary variables. We explore this approach further by studying the inequality description of the convex hull of the feasible set of LPs with PLFs in the space of the continuous variables, and using the new cuts in a branch-and-cut scheme without auxiliary binary variables. We give two families of valid inequalities. The first family is obtained by lifting the convexity constraints. The second family consists of lifted cover inequalities. Finally, we report computational results that demonstrate the effectiveness of our cuts, and that branch-and-cut without auxiliary binary variables is significantly more practical than the traditional mixed-integer programming approach.
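The SOS2 modeling device at the heart of this abstract can be made concrete with a small sketch (illustrative breakpoints, not from the paper): a point x is written as a convex combination of breakpoints with at most two adjacent multipliers nonzero, and the piecewise-linear cost is the same combination of the breakpoint values.

```python
# Illustrative sketch: the SOS2 convex-combination representation of a
# piecewise-linear function. x = sum_i lam_i * b_i with lam >= 0,
# sum lam_i = 1, and at most two *adjacent* lam_i nonzero (SOS2);
# then f(x) = sum_i lam_i * f_i.

import bisect

def sos2_weights(x, breakpoints):
    """Return SOS2 multipliers lam with at most two adjacent nonzeros."""
    j = bisect.bisect_right(breakpoints, x) - 1
    j = min(j, len(breakpoints) - 2)  # clamp x to the last segment
    lo, hi = breakpoints[j], breakpoints[j + 1]
    t = (x - lo) / (hi - lo)
    lam = [0.0] * len(breakpoints)
    lam[j], lam[j + 1] = 1.0 - t, t
    return lam

def plf_value(x, breakpoints, values):
    lam = sos2_weights(x, breakpoints)
    return sum(l * v for l, v in zip(lam, values))

# breakpoints of a piecewise-linear cost curve and its values there
b, f = [0.0, 2.0, 5.0, 10.0], [0.0, 1.0, 4.0, 14.0]
print(plf_value(3.5, b, f))  # midpoint of segment [2, 5]
```

In a mixed-integer model the adjacency condition is what must be enforced, either with auxiliary binaries or, as the paper advocates, by branching directly on the SOS2 sets.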

Journal ArticleDOI
TL;DR: This work studies the use of two types of dual-optimal inequalities to accelerate and stabilize the whole convergence process, and proposes two methods for recovering primal feasibility and optimality, depending on the type of inequalities used.
Abstract: Column generation is one of the most successful approaches for solving large-scale linear programming problems. However, degeneracy difficulties and long-tail effects are known to occur as problems become larger. In recent years, several stabilization techniques of the dual variables have proven to be effective. We study the use of two types of dual-optimal inequalities to accelerate and stabilize the whole convergence process. Added to the dual formulation, these constraints are satisfied by all or a subset of the dual-optimal solutions. Therefore, the optimal objective function value of the augmented dual problem is identical to the original one. Adding constraints to the dual problem leads to adding columns to the primal problem, and feasibility of the solution may be lost. We propose two methods for recovering primal feasibility and optimality, depending on the type of inequalities that are used. Our computational experiments on the binary and the classical cutting-stock problems, and more specifically on the so-called triplet instances, show that the use of relevant dual information has a tremendous effect on the reduction of the number of column generation iterations.
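For readers unfamiliar with the cutting-stock application mentioned above, the column-generation pricing step can be sketched as follows (hypothetical widths, duals, and roll width; the stabilizing dual-optimal inequalities studied in the paper are not shown): given the current dual prices, a new cutting pattern is sought by solving an unbounded knapsack, and the pattern enters the master problem when its dual value exceeds 1.

```python
# Illustrative sketch of the cutting-stock pricing subproblem: find the
# cutting pattern of maximum total dual value fitting on one roll (an
# unbounded knapsack, solved by DP). A column with value > 1 has
# negative reduced cost and is added to the master problem.

def price_pattern(widths, duals, roll_width):
    """Return (best dual value, pattern as item counts per width)."""
    W = roll_width
    best = [0.0] * (W + 1)    # best[c]: max dual value within capacity c
    choice = [-1] * (W + 1)   # item achieving best[c], -1 if none
    for c in range(1, W + 1):
        for i, w in enumerate(widths):
            if w <= c and best[c - w] + duals[i] > best[c]:
                best[c] = best[c - w] + duals[i]
                choice[c] = i
    pattern = [0] * len(widths)
    c = W
    while c > 0 and choice[c] != -1:  # walk the DP table back
        i = choice[c]
        pattern[i] += 1
        c -= widths[i]
    return best[W], pattern

value, pattern = price_pattern([3, 5, 7], [0.4, 0.7, 1.1], 10)
print(value, pattern)
```

The long-tail behavior the paper targets arises because the duals driving this subproblem oscillate between master iterations; the dual-optimal inequalities restrict them to a stabler region without changing the optimal value.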

Journal ArticleDOI
TL;DR: Numerical results indicate that, in a deregulated market, interruptible contracts can help alleviate supply problems associated with spikes of price and demand and that competition between retailers results in lower value and less frequent interruption.
Abstract: We consider interruptible electricity contracts issued by an electricity retailer that allow for interruptions to electric service in exchange for either an overall reduction in the price of electricity delivered or for financial compensation at the time of interruption. We provide a structural model to determine electricity prices based on stochastic models of supply and demand. We use stochastic dynamic programming to value interruptible contracts from the point of view of an electricity retailer, and describe the optimal interruption strategy. We also demonstrate that structural models can be used to value contracts in competitive markets. Our numerical results indicate that, in a deregulated market, interruptible contracts can help alleviate supply problems associated with spikes of price and demand and that competition between retailers results in lower value and less frequent interruption.
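The valuation-by-dynamic-programming idea can be sketched in miniature (hypothetical prices, rates, and transition probabilities; the paper's structural supply/demand price model is replaced by a two-state spot-price Markov chain): backward induction gives the retailer's value and the interrupt/serve decision in each price state.

```python
# Illustrative sketch (hypothetical numbers, two-state spot-price Markov
# chain): backward induction for the retailer's value of an interruptible
# contract. Each period the retailer either serves one unit of demand at
# a fixed retail rate, or interrupts and pays the customer compensation.

def contract_value(T, retail=60.0, compensation=30.0,
                   prices=(40.0, 200.0),          # normal vs. spike spot price
                   P=((0.9, 0.1), (0.5, 0.5))):   # transition probabilities
    n = len(prices)
    V = [0.0] * n  # value-to-go after the horizon
    for _ in range(T):
        V_new = [0.0] * n
        for s in range(n):
            cont = sum(P[s][j] * V[j] for j in range(n))
            serve = retail - prices[s]     # buy at spot, sell at retail
            interrupt = -compensation      # pay to curtail the customer
            V_new[s] = max(serve, interrupt) + cont
        V = V_new
    return V  # value indexed by current spot-price state

print(contract_value(T=3))
```

In this toy setting the optimal strategy interrupts exactly in the spike state, mirroring the paper's finding that interruptible contracts help the retailer ride out price and demand spikes.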