
Showing papers in "Operations Research in 2017"


Journal ArticleDOI
TL;DR: Rank Centrality as mentioned in this paper is an iterative rank aggregation algorithm for discovering scores for objects (or items) from pairwise comparisons; it has a natural random walk interpretation over the graph of objects, with an edge present between a pair of objects if they are compared.
Abstract: The question of aggregating pairwise comparisons to obtain a global ranking over a collection of objects has been of interest for a very long time: be it ranking of online gamers (e.g., MSR’s TrueSkill system) and chess players, aggregating social opinions, or deciding which product to sell based on transactions. In most settings, in addition to obtaining a ranking, finding ‘scores’ for each object (e.g., player’s rating) is of interest for understanding the intensity of the preferences. In this paper, we propose Rank Centrality, an iterative rank aggregation algorithm for discovering scores for objects (or items) from pairwise comparisons. The algorithm has a natural random walk interpretation over the graph of objects with an edge present between a pair of objects if they are compared; the score, which we call Rank Centrality, of an object turns out to be its stationary probability under this random walk. To study the efficacy of the algorithm, we consider the popular Bradley-Terry-Luce (BTL) model (equ...

208 citations
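The random-walk construction in the abstract is concrete enough to sketch. Below is a minimal NumPy version, assuming a matrix of pairwise win counts; the scaling by the maximum degree and the lazy self-loops follow the description above, but all names and details are ours, not the authors' code.

```python
import numpy as np

def rank_centrality(wins, tol=1e-10):
    """Scores items from pairwise comparisons via a random walk.

    wins[i, j] = number of times item j beat item i, so probability mass
    flows from losers toward winners. A sketch of the idea in the abstract,
    not the authors' implementation.
    """
    n = wins.shape[0]
    totals = wins + wins.T                      # comparisons per pair
    frac = np.where(totals > 0, wins / np.maximum(totals, 1), 0.0)
    d_max = max(1, int((totals > 0).sum(axis=1).max()))
    P = frac / d_max                            # off-diagonal transition probs
    np.fill_diagonal(P, 0.0)
    P += np.diag(1.0 - P.sum(axis=1))           # self-loops make rows sum to 1
    pi = np.full(n, 1.0 / n)
    while True:                                 # power iteration
        pi_next = pi @ P
        if np.abs(pi_next - pi).max() < tol:
            return pi_next                      # stationary probs = scores
        pi = pi_next

# Example: item 2 beats both others most often, so it gets the top score
wins = np.array([[0, 6, 8],
                 [4, 0, 7],
                 [2, 3, 0]], dtype=float)
print(rank_centrality(wins))
```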


Journal ArticleDOI
TL;DR: This paper proposes a general-purpose branch-and-cut exact solution method based on several new classes of valid inequalities, which also exploits a very effective bilevel-specific preprocessing procedure.
Abstract: Bilevel optimization problems are very challenging optimization models arising in many important practical contexts, including pricing mechanisms in the energy sector, airline and telecommunication industry, transportation networks, critical infrastructure defense, and machine learning. In this paper, we consider bilevel programs with continuous and discrete variables at both levels, with linear objectives and constraints (continuous upper level variables, if any, must not appear in the lower level problem). We propose a general-purpose branch-and-cut exact solution method based on several new classes of valid inequalities, which also exploits a very effective bilevel-specific preprocessing procedure. An extensive computational study is presented to evaluate the performance of various solution methods on a common testbed of more than 800 instances from the literature and 60 randomly generated instances. Our new algorithm consistently outperforms (often by a large margin) alternative state-of-the-art metho...

147 citations


Journal ArticleDOI
TL;DR: It is demonstrated that the pessimistic joint chance constraints are conic representable if (i) the constraint coefficients of the decisions are deterministic, (ii) the support set of the uncertain parameters is a cone, and (iii) the dispersion function is of first order, that is, it is positively homogeneous.
Abstract: We study joint chance constraints where the distribution of the uncertain parameters is only known to belong to an ambiguity set characterized by the mean and support of the uncertainties and by an...

134 citations
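In generic notation (ours, not necessarily the paper's), such a problem constrains the worst case over an ambiguity set defined by support, mean, and a dispersion bound; the TL;DR's conditions refer to the cone Ξ and the positively homogeneous dispersion function g:

```latex
\mathcal{P} = \Bigl\{ P \;:\; P(\xi \in \Xi) = 1,\ \ \mathbb{E}_P[\xi] = \mu,\ \ \mathbb{E}_P\bigl[g(\xi - \mu)\bigr] \le \sigma \Bigr\},
\qquad
\inf_{P \in \mathcal{P}}\, P\bigl( a_k(x)^{\top} \xi \le b_k(x),\ \ k = 1,\dots,K \bigr) \;\ge\; 1 - \epsilon .
```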


Journal ArticleDOI
TL;DR: Information-directed sampling as mentioned in this paper is a new approach to online optimization problems in which a decision maker must balance between exploration and exploitation while learning from partial feedback, and each action is sampled in a manner that minimizes the ratio between squared expected single-period regret and a measure of information gain.
Abstract: We propose information-directed sampling—a new approach to online optimization problems in which a decision maker must balance between exploration and exploitation while learning from partial feedback. Each action is sampled in a manner that minimizes the ratio between squared expected single-period regret and a measure of information gain: the mutual information between the optimal action and the next observation. We establish an expected regret bound for information-directed sampling that applies across a very general class of models and scales with the entropy of the optimal action distribution. We illustrate through simple analytic examples how information-directed sampling accounts for kinds of information that alternative approaches do not adequately address and that this can lead to dramatic performance gains. For the widely studied Bernoulli, Gaussian, and linear bandit problems, we demonstrate state-of-the-art simulation performance. The electronic companion is available at https://doi.org/10.128...

124 citations
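The information ratio described above can be approximated from posterior samples. The sketch below is a deterministic variant for Bernoulli bandits; the paper's method optimizes over randomized actions, which this omits, and the Monte Carlo approximation of the mutual information and all names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def ids_action(alpha, beta, n_samples=2000):
    """Pick an arm by (deterministic) information-directed sampling."""
    theta = rng.beta(alpha, beta, size=(n_samples, len(alpha)))  # posterior draws
    opt = theta.argmax(axis=1)                  # best arm in each draw
    p_opt = np.bincount(opt, minlength=len(alpha)) / n_samples
    mean = theta.mean(axis=0)
    regret = theta.max(axis=1).mean() - mean    # expected single-period regret
    info = np.zeros(len(alpha))                 # mutual info with optimal action
    for a in range(len(alpha)):
        for astar in range(len(alpha)):
            if p_opt[astar] == 0:
                continue
            m = theta[opt == astar, a].mean()   # E[theta_a | a* = astar]
            kl = m * np.log(m / mean[a]) + (1 - m) * np.log((1 - m) / (1 - mean[a]))
            info[a] += p_opt[astar] * kl
    ratio = regret ** 2 / np.maximum(info, 1e-12)
    return int(ratio.argmin())                  # minimize the information ratio

# Usage: 3-armed Bernoulli bandit with Beta(1,1) priors
true_means = [0.3, 0.5, 0.6]
alpha, beta = np.ones(3), np.ones(3)
for t in range(500):
    a = ids_action(alpha, beta)
    reward = rng.random() < true_means[a]
    alpha[a] += reward
    beta[a] += 1 - reward
```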


Journal ArticleDOI
TL;DR: An iterative refinement algorithm using partially time-expanded networks is developed that solves continuous-time service network design problems to optimality without explicitly modeling every point in time.
Abstract: Consolidation carriers transport shipments that are small relative to trailer capacity. To be cost effective, the carrier must consolidate shipments, which requires coordinating their paths in both space and time; i.e., the carrier must solve a service network design problem. Most service network design models rely on discretization of time—i.e., instead of determining the exact time at which a dispatch should occur, the model determines a time interval during which a dispatch should occur. While the use of time discretization is widespread in service network design models, a fundamental question related to its use has never been answered: Is it possible to produce an optimal continuous-time solution without explicitly modeling each point in time? We answer this question in the affirmative. We develop an iterative refinement algorithm using partially time-expanded networks that solves continuous-time service network design problems. An extensive computational study demonstrates that the algorithm not only...

119 citations


Journal ArticleDOI
TL;DR: This work considers revenue management problems when customers choose among the offered products according to the Markov chain choice model, and gives a linear program to obtain the optimal solution.
Abstract: We consider revenue management problems when customers choose among the offered products according to the Markov chain choice model. In this choice model, a customer arrives into the system to purchase a particular product. If this product is available for purchase, then the customer purchases it. Otherwise, the customer transitions to another product or to the no purchase option, until she reaches an available product or the no purchase option. We consider three classes of problems. First, we study assortment problems, where the goal is to find a set of products to offer to maximize the expected revenue obtained from each customer. We give a linear program to obtain the optimal solution. Second, we study single resource revenue management problems, where the goal is to adjust the set of offered products over a selling horizon when the sale of each product consumes the resource. We show how the optimal set of products to offer changes with the remaining resource inventory. Third, we study network revenue ...

113 citations
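The purchase probabilities implied by this model can be computed with a standard absorbing-chain calculation, sketched below; the arrival distribution, transition matrix, and all numbers are invented for illustration.

```python
import numpy as np

def purchase_probs(arrival, trans, offered):
    """Purchase probability of each product under the Markov chain choice model.

    arrival[i]: prob. a customer initially wants product i.
    trans[i, j]: prob. of moving to product j when i is unavailable
                 (rows may sum to < 1; the rest is the no-purchase option).
    offered: boolean array, True if product i is offered.
    """
    n = len(arrival)
    N = ~np.asarray(offered)                 # unoffered (transient) products
    Q = trans[np.ix_(N, N)]                  # transitions among unoffered products
    # v[i]: expected visits to unoffered product i before absorption
    v = np.linalg.solve(np.eye(N.sum()) - Q.T, arrival[N])
    buy = np.zeros(n)
    buy[offered] = arrival[offered] + v @ trans[np.ix_(N, offered)]
    return buy                               # buy.sum() <= 1; rest is no-purchase

# Tiny example: 3 products, only product 2 offered
arrival = np.array([0.5, 0.3, 0.2])
trans = np.array([[0.0, 0.6, 0.2],           # if 0 unavailable: 60% -> 1, 20% -> 2
                  [0.3, 0.0, 0.4],
                  [0.1, 0.2, 0.0]])          # remaining mass = leave w/o purchase
offered = np.array([False, False, True])
print(purchase_probs(arrival, trans, offered))
```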


Journal ArticleDOI
TL;DR: A theoretical model and dispatching policy are proposed to evaluate how many redundant copies are needed to achieve a response time benefit, and the magnitude of the potential gains.
Abstract: Redundancy is an important strategy for reducing response time in multi-server distributed queueing systems. This strategy has been used in a variety of settings, but only recently have researchers begun analytical studies. The idea behind redundancy is that customers can greatly reduce response time by waiting in multiple queues at the same time, thereby experiencing the minimum time across queues. Redundancy has been shown to produce significant response time improvements in applications ranging from organ transplant waitlists to Google’s BigTable service. However, despite the growing body of theoretical and empirical work on the benefits of redundancy, there is little work addressing the questions of how many copies one needs to make to achieve a response time benefit, and the magnitude of the potential gains. In this paper we propose a theoretical model and dispatching policy to evaluate these questions. Our system consists of k servers, each with its own queue. We introduce the Redundancy-d policy, u...

100 citations
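The Redundancy-d policy lends itself to a compact event-driven simulation when service times are exponential, since memorylessness means a uniformly random busy server completes next. A toy version with our own parameter choices, not the paper's exact setup:

```python
import random

def simulate_redundancy_d(k=10, d=2, lam=0.8, mu=1.0, n_jobs=20_000, seed=1):
    """Each arriving job sends copies to d of the k queues and departs when
    the first copy finishes; remaining copies are cancelled."""
    rng = random.Random(seed)
    queues = [[] for _ in range(k)]        # head of each list is in service
    in_system = {}                         # job id -> (arrival time, servers)
    t, done, total_resp, next_id = 0.0, 0, 0.0, 0
    while done < n_jobs:
        busy = [s for s in range(k) if queues[s]]
        rate = lam * k + mu * len(busy)    # total event rate (lam per server)
        t += rng.expovariate(rate)
        if rng.random() < lam * k / rate:  # arrival event
            servers = rng.sample(range(k), d)
            for s in servers:
                queues[s].append(next_id)
            in_system[next_id] = (t, servers)
            next_id += 1
        else:                              # completion at a random busy server
            s = rng.choice(busy)
            job = queues[s][0]
            arr, servers = in_system.pop(job)
            total_resp += t - arr
            done += 1
            for s2 in servers:             # cancel all copies of finished job
                queues[s2].remove(job)
    return total_resp / done

print(simulate_redundancy_d())             # mean response time under Redundancy-d
```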


Journal ArticleDOI
TL;DR: A randomized algorithm is derived which delivers a solution within a factor O(log n/log log n) of the optimum of the asymmetric traveling salesman problem with high probability.
Abstract: We present a randomized O(log n/log log n)-approximation algorithm for the asymmetric traveling salesman problem (ATSP). This provides the first asymptotic improvement over the long-standing Θ(log n)-approximation bound stemming from the work of Frieze et al. (1982) [Frieze AM, Galbiati G, Maffioli F (1982) On the worst-case performance of some algorithms for the asymmetric traveling salesman problem. Networks 12(1):23–39]. The key ingredient of our approach is a new connection between the approximability of the ATSP and the notion of so-called thin trees. To exploit this connection, we employ maximum entropy rounding—a novel method of randomized rounding of LP relaxations of optimization problems. We believe that this method might be of independent interest.

94 citations


Journal ArticleDOI
TL;DR: An exact finite algorithm based on an optimal-value-function reformulation is proposed for bilevel mixed-integer programs whose constraints and objective functions depend on both upper- and lower-level variables; the algorithm can be tailored to accommodate either optimistic or pessimistic assumptions on the follower behavior.
Abstract: We examine bilevel mixed-integer programs whose constraints and objective functions depend on both upper- and lower-level variables. The class of problems we consider allows for nonlinear terms to appear in both the constraints and the objective functions, requires all upper-level variables to be integer, and allows a subset of the lower-level variables to be integer. This class of bilevel problems is difficult to solve because the upper-level feasible region is defined in part by optimality conditions governing the lower-level variables, which are difficult to characterize because of the nonconvexity of the follower problem. We propose an exact finite algorithm for these problems based on an optimal-value-function reformulation. We demonstrate how this algorithm can be tailored to accommodate either optimistic or pessimistic assumptions on the follower behavior. Computational experiments demonstrate that our approach outperforms a state-of-the-art algorithm for solving bilevel mixed-integer linear programs.

89 citations


Journal ArticleDOI
TL;DR: A dynamic pricing model is considered in which the demand function is unknown but belongs to a known finite set and the seller is allowed to make at most m price changes during T periods, with the objective of minimizing the worst-case regret.
Abstract: In a dynamic pricing problem where the demand function is not known a priori, price experimentation can be used as a demand learning tool. Existing literature usually assumes no constraint on price changes, but in practice, sellers often face business constraints that prevent them from conducting extensive experimentation. We consider a dynamic pricing model where the demand function is unknown but belongs to a known finite set. The seller is allowed to make at most m price changes during T periods. The objective is to minimize the worst-case regret, i.e., the expected total revenue loss compared with a clairvoyant who knows the demand distribution in advance. We demonstrate a pricing policy that incurs a regret of O(log^(m) T), i.e., the logarithm iterated m times. Furthermore, we describe an implementation of this pricing policy at Groupon, a large e-commerce marketplace for daily deals. The field study shows significant impact on revenue and bookings. The e-companion is available at https://doi.org/10.1287/opre.2017.1629 .

77 citations
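In display form, the bound's m-times-iterated logarithm, which the plain-text abstract renders awkwardly, reads:

```latex
\log^{(m)} T \;=\; \underbrace{\log \log \cdots \log}_{m\ \text{times}} T,
\qquad
\text{worst-case regret} \;=\; O\bigl(\log^{(m)} T\bigr),
\quad\text{e.g., } m = 2 \text{ gives } O(\log \log T).
```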


Journal ArticleDOI
TL;DR: In this article, the problem of planning sales promotions for a fast-moving consumer goods (FMCG) product in a grocery retail setting is considered, and a linear integer programming (IP) approximation of the promotion optimization problem (POP) is proposed.
Abstract: Sales promotions are important in the fast-moving consumer goods (FMCG) industry due to the significant spending on promotions and the fact that a large proportion of FMCG products are sold on promotion. This paper considers the problem of planning sales promotions for an FMCG product in a grocery retail setting. The category manager has to solve the promotion optimization problem (POP) for each product, i.e., how to select a posted price for each period in a finite horizon so as to maximize the retailer's profit. Through our collaboration with Oracle Retail, we developed an optimization formulation for the POP that can be used by category managers in a grocery environment. Our formulation incorporates business rules that are relevant in practice. We propose general classes of demand functions, including multiplicative and additive, which incorporate the post-promotion dip effect and can be estimated from sales data. In general, the POP formulation has a nonlinear objective and is NP-hard. We then propose a linear integer programming (IP) approximation of the POP. We show that the IP has an integral feasible region, and hence can be solved efficiently as a linear program (LP). We develop performance guarantees for the profit of the LP solution relative to the optimal profit. Using sales data from a grocery retailer, we first show that our demand models can be estimated with high accuracy, and then demonstrate that using the LP promotion schedule could potentially increase the profit by 3%, with a potential profit increase of 5% if some business constraints were to be relaxed. The online appendix is available at https://doi.org/10.1287/opre.2016.1573
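For intuition on the post-promotion dip and the business rules mentioned above, here is a brute-force toy instance; all demand numbers, rules, and parameter names are invented, and the paper's contribution is precisely an LP approximation that avoids this enumeration at realistic scale.

```python
from itertools import combinations

def plan_promotions(T=8, max_promos=2, sep=2,
                    base=100.0, p_reg=4.0, p_promo=3.0, cost=2.0,
                    lift=1.8, dip=0.75, memory=2):
    """Brute-force promotion planning with a multiplicative demand model:
    promoting multiplies demand by `lift` that period and by `dip` for the
    next `memory` periods (consumer stockpiling)."""
    best = (float("-inf"), ())
    for n in range(max_promos + 1):
        for promos in combinations(range(T), n):
            if any(b - a < sep for a, b in zip(promos, promos[1:])):
                continue                      # business rule: space promotions out
            profit = 0.0
            for t in range(T):
                mult = lift if t in promos else 1.0
                for s in promos:              # dip after each recent promotion
                    if 0 < t - s <= memory:
                        mult *= dip
                price = p_promo if t in promos else p_reg
                profit += (price - cost) * base * mult
            best = max(best, (profit, promos))
    return best

print(plan_promotions())                      # (best profit, promotion periods)
```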

Journal ArticleDOI
TL;DR: A model is presented in which customers may join the queue, balk, or inspect the queue length (at a cost) before deciding whether to join; the equilibrium of this model is computed and its existence and uniqueness are proved.
Abstract: Classical models of customer decision making in unobservable queues assume acquiring queue length information is too costly. However, due to recent advancements in communication technology, various services now make this kind of information accessible to customers at a reasonable cost. In our model, which reflects this new opportunity, customers choose among three options: join the queue, balk, or inspect the queue length before deciding whether to join. Inspection is associated with a cost. We compute the equilibrium in this model and prove its existence and uniqueness. Based on two normalized parameters—congestion and service valuation—we map all possible input parameter sets into three scenarios. Each scenario is characterized by a different impact of inspection cost on equilibrium and revenue-maximization queue disclosure policy: fully observable (when inspection cost is very low), fully unobservable (when inspection cost is too high), or observable by demand (when inspection cost is at an intermediat...

Journal ArticleDOI
TL;DR: This work studies dynamic matching policies in a stochastic marketplace for barter with agents arriving over time, and finds that the cost of waiting to thicken the market outweighs the benefit.
Abstract: We study dynamic matching policies in a stochastic marketplace for barter, with agents arriving over time. Each agent is endowed with an item and is interested in an item possessed by another agent homogeneously with probability p, independently for all pairs of agents. Three settings are considered with respect to the types of allowed exchanges: (a) only two-way cycles, in which two agents swap items, (b) two-way or three-way cycles, (c) (unbounded) chains initiated by an agent who provides an item but expects nothing in return. We consider the average waiting time as a measure of efficiency and find that the cost outweighs the benefit from waiting to thicken the market. In particular, in each of the above settings, a policy that conducts exchanges in a greedy fashion is near optimal. Further, for small p, we find that allowing three-way cycles greatly reduces the waiting time over just two-way cycles, and conducting exchanges through a chain further reduces the waiting time significantly. Thus, a centra...
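A toy simulation of the greedy two-way policy discussed above, under our own simplifications: discrete arrival times, compatibility checked pairwise on arrival, and agents still waiting at the end ignored.

```python
import random

def greedy_two_way(n_agents=20_000, p=0.1, seed=0):
    """Greedy two-way barter matching: each ordered pair wants the other's
    item independently w.p. p, so a swap is feasible w.p. p**2. The greedy
    policy swaps on arrival whenever a compatible waiting agent exists.
    Returns the average wait measured in arrivals."""
    rng = random.Random(seed)
    waiting = []                       # arrival times of unmatched agents
    total_wait, matched = 0.0, 0
    for t in range(n_agents):
        # each (waiting agent, new arrival) pair is tested exactly once
        match = next((w for w in waiting if rng.random() < p * p), None)
        if match is not None:
            waiting.remove(match)
            total_wait += t - match    # the arriving agent waits 0
            matched += 2
        else:
            waiting.append(t)
    return total_wait / max(matched, 1)

print(greedy_two_way())
```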

Journal ArticleDOI
TL;DR: A class of distributionally robust optimization models is formulated that incorporates the worst-case expected cost and worst-case conditional value-at-risk (CVaR) of appointment waiting, server idleness, and overtime as the objective or constraints.
Abstract: We consider a single-server scheduling problem given a fixed sequence of appointment arrivals with random no-shows and service durations. The probability distribution of the uncertain parameters is assumed to be ambiguous, and only the support and first moments are known. We formulate a class of distributionally robust (DR) optimization models that incorporate the worst-case expectation/conditional value-at-risk penalty cost of appointment waiting, server idleness, and overtime into the objective or constraints. Our models flexibly adapt to different prior beliefs of no-show uncertainty. We obtain exact mixed-integer nonlinear programming reformulations and derive valid inequalities to strengthen the reformulations that are solved by decomposition algorithms. In particular, we derive convex hulls for special cases of no-show beliefs, yielding polynomial-sized linear programming models for the least and the most conservative supports of no-shows. We test various instances to demonstrate the computational e...

Journal ArticleDOI
TL;DR: An empirical study of the impact of delay announcements on callers’ abandonment behavior and the performance of a call center with two priority classes finds that in this call center, callers' abandonment behavior is affected by the announcement messages heard.
Abstract: We undertake an empirical study of the impact of delay announcements on callers’ abandonment behavior and the performance of a call center with two priority classes. A Cox regression analysis reveals that in this call center, callers’ abandonment behavior is affected by the announcement messages heard. To account for this, we formulate a structural estimation model of callers’ (endogenous) abandonment decisions. In this model, callers are forward-looking utility maximizers and make their abandonment decisions by solving an optimal stopping problem. Each caller receives a reward from service and incurs a linear cost of waiting. The reward and per-period waiting cost constitute the structural parameters that we estimate from the data of callers’ abandonment decisions as well as the announcement messages heard. The call center performance is modeled by a Markovian approximation. The main methodological contribution is the definition of an equilibrium in steady state as one where callers’ expectation of their...

Journal ArticleDOI
TL;DR: The Course Match algorithm as mentioned in this paper performs a massive parallel heuristic search that solves billions of mixed-integer programs to output an approximate competitive equilibrium in a fake-money economy for courses.
Abstract: Combinatorial allocation involves assigning bundles of items to agents when the use of money is not allowed. Course allocation is one common application of combinatorial allocation, in which the bundles are schedules of courses and the assignees are students. Existing mechanisms used in practice have been shown to have serious flaws, which lead to allocations that are inefficient, unfair, or both. A recently developed mechanism is attractive in theory but has several features that limit its feasibility for practice. This paper reports on the design and implementation of a new course allocation mechanism, Course Match, that is suitable in practice. To find allocations, Course Match performs a massive parallel heuristic search that solves billions of mixed-integer programs to output an approximate competitive equilibrium in a fake-money economy for courses. Quantitative summary statistics for two semesters of full-scale use at a large business school (the Wharton School of Business, which has about 1,700 students and up to 350 courses in each semester) demonstrate that Course Match is both fair and efficient, a finding reinforced by student surveys showing large gains in satisfaction and perceived fairness.

Journal ArticleDOI
TL;DR: In an innovation tournament, as discussed by the authors, an organizer solicits innovative ideas from a number of independent agents; agents exert effort to develop their solutions, but outcomes remain subject to technical uncertainty.
Abstract: In an innovation tournament, an organizer solicits innovative ideas from a number of independent agents. Agents exert effort to develop their solutions, but their outcomes are unknown due to techni...

Journal ArticleDOI
TL;DR: A closed-form approximation of EOC is established to formulate the budget allocation problem and derive the corresponding optimality conditions; the EOC- and PCS-based budget allocation problems are further linked by showing that the two are asymptotically equivalent.
Abstract: In this paper, we present a new budget allocation framework for the problem of selecting the best simulated design from a finite set of alternatives. The new framework is developed on the basis of general underlying distributions and a finite simulation budget. It adopts the expected opportunity cost (EOC) quality measure, which, compared to the traditional probability of correct selection (PCS) measure, penalizes a particularly bad choice more than a slightly incorrect selection, and is thus preferred by risk-neutral practitioners and decision makers. To this end, we establish a closed-form approximation of EOC to formulate the budget allocation problem and derive the corresponding optimality conditions. A sequential budget allocation algorithm is then developed for implementation. The efficiency of the proposed method is illustrated via numerical experiments. We also link the EOC and PCS-based budget allocation problems by showing that the two are asymptotically equivalent. This result explains, to some...
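The contrast between EOC and PCS is easy to see numerically. A sketch using normal posteriors and plain Monte Carlo; this illustrates the two measures, not the paper's closed-form approximation or allocation rule.

```python
import numpy as np

rng = np.random.default_rng(0)

def eoc_and_pcs(means, std_errs, n_draws=100_000):
    """Monte Carlo estimates of EOC and PCS from normal posteriors.

    EOC penalizes how *bad* a wrong pick is; PCS only counts whether the
    pick is wrong.
    """
    draws = rng.normal(means, std_errs, size=(n_draws, len(means)))
    pick = int(np.argmax(means))           # design we would select now
    best = draws.max(axis=1)
    eoc = (best - draws[:, pick]).mean()   # expected opportunity cost
    pcs = (draws.argmax(axis=1) == pick).mean()
    return eoc, pcs

# Two near-tied designs and one clearly worse one: PCS is mediocre, but
# EOC is tiny because picking the near-tied runner-up costs almost nothing.
print(eoc_and_pcs(np.array([1.00, 0.98, 0.60]), np.array([0.05, 0.05, 0.05])))
```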

Journal ArticleDOI
TL;DR: In this paper, a new general class of unbiased estimators is proposed that admits previous debiasing schemes as special cases; new lower-variance estimators, stratified versions of the earlier unbiased schemes, are shown to be asymptotically as efficient as MLMC, both in terms of variance and cost.
Abstract: Multilevel Monte Carlo (MLMC) and recently proposed unbiased estimators are closely related. This connection is elaborated by presenting a new general class of unbiased estimators, which admits previous debiasing schemes as special cases. New lower variance estimators are proposed, which are stratified versions of earlier unbiased schemes. Under general conditions, essentially when MLMC admits the canonical square root Monte Carlo error rate, the proposed new schemes are shown to be asymptotically as efficient as MLMC, both in terms of variance and cost. The experiments demonstrate that the variance reduction provided by the new schemes can be substantial.
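The connection to debiasing schemes can be made concrete with the classic single-term estimator: draw a random level, then return the coupled level difference divided by its probability. A toy instance with our own target and level hierarchy (Taylor approximations of exp), not an example from the paper:

```python
import math
import random

rng = random.Random(0)

def f_level(u, n):
    """Level-n approximation of exp(u): Taylor series with n+1 terms."""
    return sum(u ** k / math.factorial(k) for k in range(n + 1))

def single_term_estimator(r=0.5):
    """One draw of the single-term unbiased ('debiased') estimator of
    E[exp(U)] = e - 1 for U ~ Uniform(0, 1).

    E[Z] = sum_n P(N=n) * E[delta_n] / P(N=n) = lim_n E[f_n(U)], so the
    estimator is unbiased; the geometric tail keeps its variance finite.
    """
    u = rng.random()
    n = 0
    while rng.random() < r:            # geometric level: P(N = n) = (1-r) r^n
        n += 1
    p_n = (1 - r) * r ** n
    delta = f_level(u, n) - (f_level(u, n - 1) if n > 0 else 0.0)
    return delta / p_n

est = sum(single_term_estimator() for _ in range(100_000)) / 100_000
print(est, math.e - 1)                 # both close to 1.71828
```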

Journal ArticleDOI
TL;DR: It is proved that the stochastic clearing formulation proposed by Pritchard et al. (2010) yields price distortions that are bounded by the bid prices, and it is shown that adding a similar penalty term to transmission flows and phase angles ensures boundedness throughout the network.
Abstract: We argue that deterministic market clearing formulations introduce arbitrary distortions between day-ahead and expected real-time prices that bias economic incentives. We extend and analyze a previously proposed stochastic clearing formulation in which the social surplus function induces penalties between day-ahead and real-time quantities. We prove that the formulation yields bounded price distortions, and we show that adding a similar penalty term to transmission flows and phase angles ensures boundedness throughout the network. We prove that when the price distortions are zero, day-ahead quantities equal a quantile of their real-time counterparts. The undesired effects of price distortions suggest that stochastic settings provide significant benefits over deterministic ones that go beyond social surplus improvements. We propose additional metrics to evaluate these benefits.

Journal ArticleDOI
TL;DR: This paper proposes a new type of optimization approach for product line design under uncertainty, based on the paradigm of robust optimization where, rather than optimizing the expected revenue with respect to a single model, one optimizes the worst-case expected revenue with respect to an uncertainty set of models.
Abstract: The majority of approaches to product line design that have been proposed by marketing scientists assume that the underlying choice model that describes how the customer population will respond to a new product line is known precisely. In reality, however, marketers do not precisely know how the customer population will respond and can only obtain an estimate of the choice model from limited conjoint data. In this paper, we propose a new type of optimization approach for product line design under uncertainty. Our approach is based on the paradigm of robust optimization where, rather than optimizing the expected revenue with respect to a single model, one optimizes the worst-case expected revenue with respect to an uncertainty set of models. This framework allows us to account for parameter uncertainty, when we may be confident about the type of model structure but not about the values of the parameters, and structural uncertainty, when we may not even be confident about the right model structure to use to...
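The worst-case criterion is simple to state in code. A toy max-min sketch with an MNL response model and a finite uncertainty set of utility vectors; the paper's uncertainty sets and solution method are more general, and every number here is invented.

```python
import numpy as np
from itertools import combinations

def mnl_revenue(prices, utils):
    """Expected revenue per customer under MNL (outside option has utility 0)."""
    w = np.exp(utils)
    return (prices * w).sum() / (1.0 + w.sum())

def robust_product_line(candidates, prices, util_scenarios, line_size=2):
    """Pick the product line maximizing worst-case expected MNL revenue."""
    best = (float("-inf"), ())
    for line in combinations(range(len(candidates)), line_size):
        idx = list(line)
        worst = min(mnl_revenue(prices[idx], u[idx]) for u in util_scenarios)
        best = max(best, (worst, tuple(candidates[i] for i in idx)))
    return best

candidates = ["A", "B", "C", "D"]
prices = np.array([5.0, 4.0, 6.0, 3.0])
scenarios = [np.array([0.8, 0.5, 0.2, 0.9]),   # a few posterior-plausible
             np.array([0.4, 0.7, 0.6, 0.3]),   # utility vectors standing in
             np.array([0.6, 0.6, 0.1, 0.5])]   # for model uncertainty
print(robust_product_line(candidates, prices, scenarios))
```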

Journal ArticleDOI
TL;DR: In this computational study, two methods for implementing GSP on parallel computers are discussed, namely the Message-Passing Interface and Hadoop MapReduce, and the latter provides good protection against core failures at the expense of a significant drop in utilization due to periodic unavoidable synchronization.
Abstract: The goal of ranking and selection (R&S) procedures is to identify the best stochastic system from among a finite set of competing alternatives. Such procedures require constructing estimates of each system’s performance, which can be obtained simultaneously by running multiple independent replications on a parallel computing platform. Nontrivial statistical and implementation issues arise when designing R&S procedures for a parallel computing environment. We propose several design principles for parallel R&S procedures that preserve statistical validity and maximize core utilization, especially when large numbers of alternatives or cores are involved. These principles are followed closely by our parallel Good Selection Procedure (GSP), which, under the assumption of normally distributed output, (i) guarantees to select a system in the indifference zone with high probability, (ii) in tests on up to 1,024 parallel cores runs efficiently, and (iii) in an example uses smaller sample sizes compared to existing...
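The basic setup, replications spread across cores and aggregated afterward, looks like this with Python's multiprocessing. This is a naive equal-allocation toy, not GSP; GSP's statistical guarantees come from how it sequentially allocates and screens.

```python
import numpy as np
from multiprocessing import Pool

MEANS = [0.0, 0.2, 0.5, 0.4]     # toy "true" performance of each system

def replicate(args):
    """One independent replication of one system (a stand-in simulation)."""
    system, seed = args
    rng = np.random.default_rng(seed)
    return system, rng.normal(loc=MEANS[system], scale=1.0)

if __name__ == "__main__":
    # one seed per (system, replication) keeps replications independent
    jobs = [(s, 10_000 * s + r) for s in range(len(MEANS)) for r in range(100)]
    with Pool() as pool:                      # replications spread over cores
        results = pool.map(replicate, jobs)
    means = {s: np.mean([y for sys, y in results if sys == s])
             for s in range(len(MEANS))}
    print(max(means, key=means.get), means)   # naive pick; GSP adds guarantees
```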

Journal ArticleDOI
TL;DR: In this paper, an inverse optimization-based methodology is proposed to determine market structure from commodity and transportation prices in locational marginal price-based electricity markets where prices are shadow prices in the centralized optimization used to clear the market.
Abstract: We propose an inverse optimization-based methodology to determine market structure from commodity and transportation prices. The methods are appropriate for locational marginal price-based electricity markets where prices are shadow prices in the centralized optimization used to clear the market. We apply the inverse optimization methodology to outcome data from the Midcontinent ISO electricity market (MISO) and, under noise-free assumptions, recover parameters of transmission and related constraints that are not revealed to market participants but explain the price variation. We demonstrate and evaluate analytical uses of the recovered structure including reconstruction of the pricing mechanism and investigations of locational market power through the transmission constrained residual demand derivative. Prices generated from the reconstructed mechanism are highly correlated to actual MISO prices under a wide variety of market conditions. In a case study, the residual demand derivative is shown to be corr...

Journal ArticleDOI
TL;DR: A data-driven robust optimization method is developed that solves large-scale real-sized versions of this model close to optimality and is validated and implemented as a decision support system at the UCLA Ronald Reagan Medical Center.
Abstract: We consider the problem of minimizing daily expected resource usage and overtime costs across multiple parallel resources such as anesthesiologists and operating rooms, which are used to conduct a variety of surgical procedures at large multispecialty hospitals. To address this problem, we develop a two-stage, mixed-integer stochastic dynamic programming model with recourse. The first stage allocates these resources across multiple surgeries with uncertain durations and prescribes the sequence of surgeries to these resources. The second stage determines actual start times to surgeries based on realized durations of preceding surgeries and assigns overtime to resources to ensure all surgeries are completed using the allocation and sequence determined in the first stage. We develop a data-driven robust optimization method that solves large-scale real-sized versions of this model close to optimality. We validate and implement this model as a decision support system at the UCLA Ronald Reagan Medical Center. T...

Journal ArticleDOI
TL;DR: A decomposition technique for portfolio risk measurement is proposed, through which a high-dimensional problem may be decomposed into low-dimensional ones that allow an efficient use of the kernel smoothing approach.
Abstract: Nested estimation involves estimating an expectation of a function of a conditional expectation via simulation. This problem has of late received increasing attention amongst researchers due to its...
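The abstract's setting, an expectation of a function of a conditional expectation, is worth a concrete baseline. Plain nested simulation on a toy model; the paper's kernel-smoothing decomposition targets this same quantity more efficiently in high dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

def nested_estimate(n_outer=1000, n_inner=100):
    """Plain nested simulation of E[ f( E[X | Y] ) ].

    Toy model: Y ~ N(0,1), X | Y ~ N(Y, 1), f(m) = max(m, 0), so the true
    value is E[max(Y, 0)] = 1/sqrt(2*pi) ~= 0.399. Inner-sample noise biases
    the estimate slightly upward because f is convex (Jensen).
    """
    y = rng.normal(size=n_outer)                        # outer scenarios
    x = rng.normal(loc=y[:, None], size=(n_outer, n_inner))
    cond_mean = x.mean(axis=1)                          # inner estimates of E[X|Y]
    return np.maximum(cond_mean, 0.0).mean()

print(nested_estimate())
```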

Journal ArticleDOI
TL;DR: In this paper, the authors study a multiserver model with n flexible servers and n queues, connected through a bipartite graph, where the level of flexibility is captured by an upper bound on the graph's average degree.
Abstract: We study a multiserver model with n flexible servers and n queues, connected through a bipartite graph, where the level of flexibility is captured by an upper bound on the graph’s average degree, d_n. Applications in content replication in data centers, skill-based routing in call centers, and flexible supply chains are among our main motivations. We focus on the scaling regime where the system size n tends to infinity, while the overall traffic intensity stays fixed. We show that a large capacity region and an asymptotically vanishing queueing delay are simultaneously achievable even under limited flexibility (d_n ≪ n). Our main results demonstrate that, when d_n ≫ ln n, a family of expander-graph-based flexibility architectures has a capacity region that is within a constant factor of the maximum possible, while simultaneously ensuring a diminishing queueing delay for all arrival rate vectors in the capacity region. Our analysis is centered around a new class of virtual-queue-based scheduling policies that...

Journal ArticleDOI
TL;DR: A structural neighborhood decomposition for arc routing problems, in which the decisions about traversal orientations during services are made optimally as part of neighbor evaluation procedures, is explored.
Abstract: This article explores a structural neighborhood decomposition for arc routing problems, in which the decisions about traversal orientations during services are made optimally as part of neighbor evaluation procedures. Using memory structures, bidirectional dynamic programming, and lower bounds, we show that a large neighborhood involving classical moves on the sequences of services along with optimal orientation decisions can be searched in amortized O(1) time per move evaluation instead of O(n) as in previous works. Because of its generality and now-reduced complexity, this approach can be efficiently applied to several variants of arc routing problems, extended into large neighborhoods such as ejection chains, and integrated into two classical metaheuristics. Our computational experiments lead to solutions of high quality on the main benchmark sets for the capacitated arc routing problem (CARP), the mixed capacitated general routing problem (MCGRP), the periodic CARP, the multidepot CARP, and the min-ma...

Journal ArticleDOI
TL;DR: This paper explores the possibility of exact simulation for the SABR model and proposes an exact simulation method for the forward price and its volatility in two special but practically interesting cases, i.e., when the elasticity β = 1 and when β < 1 and the price and volatility processes are instantaneously uncorrelated.
Abstract: The stochastic alpha-beta-rho (SABR) model has become popular in the financial industry because it is capable of providing good fits to various types of implied volatility curves observed in the marketplace. However, no analytical solution to the SABR model exists that can be simulated directly. This paper explores the possibility of exact simulation for the SABR model. Our contribution is threefold. (i) We propose an exact simulation method for the forward price and its volatility in two special but practically interesting cases, i.e., when the elasticity β = 1, or when β < 1 and the price and volatility processes are instantaneously uncorrelated. Primary difficulties involved are how to simulate two random variables whose distributions can be expressed in terms of the Hartman-Watson and the noncentral chi-squared distribution functions, respectively. Two novel simulation schemes are proposed to achieve numerical accuracy, efficiency, and stability. One stems from numerical Laplace inversion and Asian option literature, and the other is based on recent developments in evaluating the noncentral chi-squared distribution functions in a robust way. Numerical examples demonstrate that our method is fast and accurate under various market environments. (ii) When β < 1 but the price and volatility processes are correlated, our simulation method becomes a semi-exact one. Numerical results suggest that it is still quite accurate when the time horizon is not long, e.g., no greater than one year. For long time horizons, a piecewise semi-exact simulation scheme is developed that reduces the biases substantially. (iii) For European option pricing under the SABR model, we propose a conditional simulation method, which reduces the variance of the plain simulation significantly, e.g., by more than 99%. The e-companion is available at https://doi.org/10.1287/opre.2017.1617 .

Journal ArticleDOI
TL;DR: The EM method is applied to jointly estimate the arrival rate of customers and the pmf of the rank-based choice model, and it is shown that it leads to a simple and highly efficient estimation procedure.
Abstract: We propose an expectation-maximization (EM) method to estimate customer preferences for a category of products using only sales transaction and product availability data. The demand model combines a general, rank-based discrete choice model of preferences with a Bernoulli process of customer arrivals over time. The discrete choice model is defined by a probability mass function (pmf) on a given set of preference rankings of alternatives, including the no-purchase alternative. Each customer is represented by a preference list, and when faced with a given choice set is assumed to either purchase the available option that ranks highest in her preference list, or not purchase at all if no available product ranks higher than the no-purchase alternative. We apply the EM method to jointly estimate the arrival rate of customers and the pmf of the rank-based choice model, and show that it leads to a remarkably simple and highly efficient estimation procedure. All limit points of the procedure are provably stationa...
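The E- and M-steps have a compact form when every arrival is observed; this is our simplification, since the paper additionally estimates a Bernoulli arrival rate to handle unobserved no-purchases. A toy sketch over a full ranking universe:

```python
from itertools import permutations
from collections import Counter

def choice(rank, offered):
    """Item a customer with preference list `rank` picks from `offered`
    (0 = no-purchase option, always available)."""
    for item in rank:
        if item == 0 or item in offered:
            return item
    return 0

def em_rank_pmf(observations, items, n_iter=200):
    """EM for the pmf over preference rankings from (offer set, choice) data.

    Rankings are full orders over the items plus the no-purchase option 0.
    """
    ranks = list(permutations([0] + list(items)))
    pmf = {r: 1.0 / len(ranks) for r in ranks}
    for _ in range(n_iter):
        counts = Counter()
        for offered, c in observations:        # E-step: posterior over rankings
            consistent = {r: pmf[r] for r in ranks if choice(r, offered) == c}
            z = sum(consistent.values())
            for r, w in consistent.items():
                counts[r] += w / z
        pmf = {r: counts[r] / len(observations) for r in ranks}  # M-step
    return pmf

# (offer set, observed choice) pairs; 0 means the customer walked away
obs = [({1, 2}, 1), ({1, 2}, 1), ({2}, 2), ({2}, 0), ({1}, 1)]
pmf = em_rank_pmf(obs, items=[1, 2])
print(max(pmf, key=pmf.get))                   # most likely preference list
```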

Journal ArticleDOI
TL;DR: Price optimization is studied under the paired combinatorial logit (PCL) model, which overcomes restrictions of the well-studied multinomial logit (MNL) and nested logit (NL) models; it is of both theoretical and practical interest to extend the normative studies on the MNL and NL models to the PCL model.
Abstract: In this paper, we study price optimization with price-demand relationships captured by the paired combinatorial logit (PCL) model, which overcomes restrictions of the well-studied multinomial logit (MNL) and nested logit (NL) models. The PCL model allows for choice-correlation and, like the NL model, includes the MNL model as a special case. Compared to the NL models, the PCL model does not restrict the sequence of the choice structure and allows for different covariances among all pairs of choices. This additional flexibility in structure enables a more accurate representation of some choice settings and broadens its empirical applications. Hence, it is of both theoretical and practical interests to extend the normative studies on the MNL and NL models to the PCL model and examine the pricing problem under this model. Due to cross-nesting of choice alternatives, the pricing problem under the PCL model poses a greater challenge than the MNL and NL models. However, using the concept of P-matrix, we are abl...
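For reference, the PCL choice probability that makes the cross-nesting concrete, in the notation commonly used for this model (σ_ij is the pair-specific similarity parameter; this display is standard background, not reproduced from the paper):

```latex
P(i) \;=\; \sum_{j \ne i}
\frac{ e^{V_i/(1-\sigma_{ij})} \,\bigl( e^{V_i/(1-\sigma_{ij})} + e^{V_j/(1-\sigma_{ij})} \bigr)^{-\sigma_{ij}} }
     { \sum_{k} \sum_{l > k} \bigl( e^{V_k/(1-\sigma_{kl})} + e^{V_l/(1-\sigma_{kl})} \bigr)^{1-\sigma_{kl}} }
```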