
Showing papers in "Computational Management Science in 2016"


Journal ArticleDOI
TL;DR: The asymptotic behavior of the distribution set is proved, the relationship between the model and other distributionally robust models is established, and the model is applied to the newsvendor and portfolio selection problems to test its performance.
Abstract: We consider optimal decision-making problems in an uncertain environment. In particular, we consider the case in which the distribution of the input is unknown, yet there is some historical data drawn from the distribution. In this paper, we propose a new type of distributionally robust optimization model called the likelihood robust optimization (LRO) model for this class of problems. In contrast to previous work on distributionally robust optimization that focuses on certain parameters (e.g., mean, variance, etc.) of the input distribution, we exploit the historical data and define the accessible distribution set to contain only those distributions that make the observed data achieve a certain level of likelihood. Then we formulate the targeting problem as one of optimizing the expected value of the objective function under the worst-case distribution in that set. Our model avoids the over-conservativeness of some prior robust approaches by ruling out unrealistic distributions while maintaining robustness of the solution for any statistically likely outcomes. We present statistical analyses of our model using Bayesian statistics and empirical likelihood theory. Specifically, we prove the asymptotic behavior of our distribution set and establish the relationship between our model and other distributionally robust models. To test the performance of our model, we apply it to the newsvendor problem and the portfolio selection problem. The test results show that the solutions of our model indeed have desirable performance.

179 citations
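
As a rough illustration of the construction described above (our notation, not the authors'): with $N_i$ counting how often outcome $\xi_i$ appears in the historical data and $\gamma$ a likelihood threshold, the likelihood robust problem takes the form

$$
\mathcal{D}_{\gamma}=\Big\{p\ge 0:\ \textstyle\sum_{i=1}^{n} p_i=1,\ \sum_{i=1}^{n} N_i \log p_i \ge \gamma\Big\},
\qquad
\max_{x\in X}\ \min_{p\in\mathcal{D}_{\gamma}}\ \sum_{i=1}^{n} p_i\, h(x,\xi_i),
$$

where $h(x,\xi)$ is the objective (e.g., profit) under decision $x$ and outcome $\xi$; choosing $\gamma$ closer to the maximum attainable log-likelihood shrinks the set and makes the model less conservative.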


Journal ArticleDOI
TL;DR: In this article, the authors compare reformulation to a deterministic optimization problem and an iterative cutting-plane method for robust linear optimization (RLO) and robust mixed-integer optimization (RMIO) problems.
Abstract: Robust optimization (RO) is a tractable method to address uncertainty in optimization problems where uncertain parameters are modeled as belonging to uncertainty sets that are commonly polyhedral or ellipsoidal. The two most frequently described methods in the literature for solving RO problems are reformulation to a deterministic optimization problem or an iterative cutting-plane method. There has been limited comparison of the two methods in the literature, and there is no guidance for when one method should be selected over the other. In this paper we perform a comprehensive computational study on a variety of problem instances for both robust linear optimization (RLO) and robust mixed-integer optimization (RMIO) problems using both methods and both polyhedral and ellipsoidal uncertainty sets. We consider multiple variants of the methods and characterize the various implementation decisions that must be made. We measure performance with multiple metrics and use statistical techniques to quantify certainty in the results. We find for polyhedral uncertainty sets that neither method dominates the other, in contrast to previous results in the literature. For ellipsoidal uncertainty sets we find that the reformulation is better for RLO problems, but there is no dominant method for RMIO problems. Given that there is no clearly dominant method, we describe a hybrid method that solves, in parallel, an instance with both the reformulation method and the cutting-plane method. We find that this hybrid approach can reduce runtimes to 50–75 % of the runtime for any one method and suggest ways that this result can be achieved and further improved on.

92 citations
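
To make the two methods concrete, here is a minimal Python sketch (our illustration, not the authors' code or problem instances) that handles one robust constraint with an ellipsoidal uncertainty set both by the reformulation $\bar a^\top x + \|P^\top x\|_2 \le b$ and by a cutting-plane loop that repeatedly adds the worst-case realisation of $a$:

```python
# Minimal sketch (not the authors' code): reformulation vs. cutting planes for
#   min c^T x  s.t.  a^T x <= b  for all a in {a_bar + P u : ||u||_2 <= 1},  x >= 0.
import numpy as np
from scipy.optimize import linprog, minimize

c = np.array([-1.0, -2.0])        # maximize x1 + 2*x2, written as a minimization
a_bar = np.array([1.0, 1.0])      # nominal constraint row
P = 0.2 * np.eye(2)               # shape of the ellipsoidal uncertainty set
b = 1.0

# Reformulation: the robust constraint is equivalent to a_bar^T x + ||P^T x||_2 <= b,
# a second-order cone constraint; for this toy size we hand it to a generic NLP solver.
ref = minimize(lambda x: c @ x, x0=np.array([0.1, 0.1]),
               bounds=[(0, None), (0, None)],
               constraints=[{"type": "ineq",
                             "fun": lambda x: b - a_bar @ x - np.linalg.norm(P.T @ x)}])

# Cutting planes: solve an LP with the cuts found so far, then add the most violated
# realisation of a until the worst case over the ellipsoid is satisfied.
A_ub, b_ub = [a_bar], [b]
for _ in range(50):
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * 2, method="highs")
    x = res.x
    violation = a_bar @ x + np.linalg.norm(P.T @ x) - b
    if violation <= 1e-8:
        break
    u = P.T @ x / np.linalg.norm(P.T @ x)     # direction of worst-case deviation
    A_ub.append(a_bar + P @ u)
    b_ub.append(b)

print("reformulation:", np.round(ref.x, 4), " cutting planes:", np.round(x, 4))
```

The trade-off between these two strategies on large RLO/RMIO instances is exactly what the paper's computational study quantifies.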


Journal ArticleDOI
TL;DR: A general decomposition framework is presented to solve adjustable robust linear optimization problems subject to polytope uncertainty exactly, and the results show that the relative performance of the algorithms depends on whether the budget is integer or fractional.
Abstract: We present in this paper a general decomposition framework to solve exactly adjustable robust linear optimization problems subject to polytope uncertainty. Our approach is based on replacing the polytope by the set of its extreme points and generating the extreme points on the fly within row generation or column-and-row generation algorithms. The novelty of our approach lies in formulating the separation problem as a feasibility problem instead of a max-min problem as done in recent works. Applying the Farkas lemma, we can reformulate the separation problem as a bilinear program, which is then linearized to obtain a mixed-integer linear programming formulation. We compare the two algorithms on a robust telecommunications network design problem under demand uncertainty with a budgeted uncertainty polytope. Our results show that the relative performance of the algorithms depends on whether the budget is integer or fractional.

58 citations


Journal ArticleDOI
TL;DR: A heuristic scenario reduction method termed forward selection in recourse clusters (FSRC), which selects scenarios based on their cost and reliability impacts, is presented to alleviate the computational burden of a two-stage stochastic program.
Abstract: A two-stage stochastic program is formulated for day-ahead commitment of thermal generating units to minimize total expected cost considering uncertainties in the day-ahead load and the availability of variable generation resources. Commitments of thermal units in the stochastic reliability unit commitment are viewed as first-stage decisions, and dispatch is relegated to the second stage. It is challenging to solve such a stochastic program if many scenarios are incorporated. A heuristic scenario reduction method termed forward selection in recourse clusters (FSRC), which selects scenarios based on their cost and reliability impacts, is presented to alleviate the computational burden. In instances down-sampled from data for an Independent System Operator in the US, FSRC results in more reliable commitment schedules having similar costs, compared to those from a scenario reduction method based on probability metrics. Moreover, in a rolling horizon study, FSRC preserves solution quality even if the reduction is substantial.

49 citations


Journal ArticleDOI
TL;DR: A stochastic dynamic programming model that co-optimizes multiple uses of distributed energy storage, including energy and ancillary service sales, backup capacity, and transformer loading relief, while accounting for market and system uncertainty is introduced.
Abstract: We introduce a stochastic dynamic programming (SDP) model that co-optimizes multiple uses of distributed energy storage, including energy and ancillary service sales, backup capacity, and transformer loading relief, while accounting for market and system uncertainty. We propose an approximation technique to efficiently solve the SDP. We also use a case study with high residential loads to demonstrate that a deployment consisting of both storage and transformer upgrades decreases costs and increases value relative to a transformer-only deployment.

36 citations


Journal ArticleDOI
TL;DR: In this paper, monotonic lower and upper bounds for the optimal objective value of a multistage stochastic program are provided, and results on a real-case transportation problem are presented.
Abstract: Multistage stochastic programs bring a computational complexity which may increase exponentially with the size of the scenario tree in real-case problems. For this reason, approximation techniques which replace the problem by a simpler one and provide lower and upper bounds on the optimal value are very useful. In this paper we provide monotonic lower and upper bounds for the optimal objective value of a multistage stochastic program. These results also apply to stochastic multistage mixed integer linear programs. Chains of inequalities among the new quantities are provided in relation to the optimal objective value, the wait-and-see solution and the expected result of using the expected value solution. The computational complexity of the proposed lower and upper bounds is discussed and an algorithmic procedure to use them is provided. Numerical results on a real-case transportation problem are presented.

33 citations
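
For readers who want the reference points mentioned in the abstract (this is the classical background chain for a minimization problem, not the new bounds derived in the paper):

$$
\mathrm{WS}\ \le\ \mathrm{RP}\ \le\ \mathrm{EEV},
\qquad
\mathrm{EVPI}=\mathrm{RP}-\mathrm{WS},
\qquad
\mathrm{VSS}=\mathrm{EEV}-\mathrm{RP},
$$

where WS is the wait-and-see value, RP the optimal value of the stochastic (recourse) program and EEV the expected result of using the expected value solution; the paper's monotonic lower and upper bounds are placed in chains of inequalities involving exactly these quantities.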


Journal ArticleDOI
TL;DR: A real options approach to evaluate the profitability of investing in a battery bank finds the real options value is higher than the NPV, confirming the value of flexible investment timing when both revenues and investment cost are uncertain.
Abstract: In this paper we develop a real options approach to evaluate the profitability of investing in a battery bank. The approach determines the optimal investment timing under conditions of uncertain future revenues and investment cost. It includes time arbitrage of the spot price and profits by providing ancillary services. Current studies of battery banks are limited, because they do not consider the uncertainty and the possibility of operating in both markets at the same time. We confirm previous research in the sense that when a battery bank participates in the spot market alone, the revenues are not sufficient to cover the initial investment cost. However, under the condition that the battery bank also can receive revenues from the balancing market, both the net present value (NPV) and the real options value are positive. The real options value is higher than the NPV, confirming the value of flexible investment timing when both revenues and investment cost are uncertain.

30 citations


Journal ArticleDOI
TL;DR: A deterministic strategic model for the valuation of electricity storage (a battery) is developed, and it is concluded that considering wind speed uncertainty can increase the estimated value of storage by up to 50 % relative to a deterministic estimate.
Abstract: The intermittent nature of wind energy generation has introduced a new degree of uncertainty to the tactical planning of energy systems. Short-term energy balancing decisions are no longer (fully) known, and it is this lack of knowledge that causes the need for strategic thinking. But despite this observation, strategic models are rarely set in an uncertain environment. And even if they are, the approach used is often inappropriate, based on some variant of scenario analysis—what-if analysis. In this paper we develop a deterministic strategic model for the valuation of electricity storage (a battery), and ask: “Though leaving out wind speed uncertainty clearly is a simplification, does it really matter for the valuation of storage?”. We answer this question by formulating a stochastic programming model, and compare its valuation to that of its deterministic counterpart. Both models capture the arbitrage value of storage, but only the stochastic model captures the battery value stemming from wind speed uncertainty. Is the difference important? The model is tested on a case from Lancaster University’s campus energy system where a wind turbine is installed. From our analysis, we conclude that considering wind speed uncertainty can increase the estimated value of storage by up to 50 % relative to a deterministic estimate. However, we also observe cases where wind speed uncertainty is insignificant for storage valuation.

19 citations


Journal ArticleDOI
TL;DR: This work argues that in practice the uncertain parameters rarely take their worst-case values simultaneously, introduces a new performance measure based on simulated average values, and applies the adjustable RO (AARC), which consistently guarantees a worst-case profit over the entire uncertainty set.
Abstract: Robust optimization (RO) is a distribution-free worst-case solution methodology designed for uncertain maximization problems via a max-min approach considering a bounded uncertainty set. It yields a feasible solution over this set with a guaranteed worst-case value. As opposed to a previous conception that RO is conservative based on optimal value analysis, we argue that in practice the uncertain parameters rarely take their worst-case values simultaneously, and thus introduce a new performance measure based on simulated average values. To this end, we apply the adjustable RO (AARC) to a single new product multi-period production planning problem under an uncertain and bounded demand so as to maximize the total profit. The demand for the product is assumed to follow a typical life-cycle pattern, whose length is typically hard to anticipate. We suggest a novel approach to predict the production plan’s profitable cycle length already at the outset of the planning horizon. The AARC is an offline method that is employed online and adjusted to past realizations of the demand by a linear decision rule (LDR). We compare it to an alternative offline method, aiming at maximum expected profit, applying the same LDR. Although the AARC maximizes the profit against a worst-case demand scenario, our empirical results show that the average performance of both methods is very similar. Further, the AARC consistently guarantees a worst-case profit over the entire uncertainty set, and its model is considerably smaller and thus exhibits superior performance.

16 citations
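
For concreteness (our generic notation, not the authors'), the linear decision rule mentioned above makes each period's production quantity an affine function of the demands observed so far,

$$
q_t(d_1,\dots,d_{t-1}) \;=\; \pi_t^{0} + \sum_{s=1}^{t-1}\pi_t^{s}\, d_s ,
$$

and the AARC optimizes over the coefficients $\pi_t^{s}$ so that the resulting plan remains feasible and achieves the best guaranteed profit for every demand path in the bounded uncertainty set.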


Journal ArticleDOI
TL;DR: A model for analyzing the upgrade of the national transmission grid that explicitly accounts for responses given by the power producers in terms of generation unit expansion is introduced.
Abstract: We introduce a model for analyzing the upgrade of the national transmission grid that explicitly accounts for responses given by the power producers in terms of generation unit expansion. The problem is modeled as a bilevel program with a mixed integer structure in both the upper and the lower level. The upper level is defined by the transmission company’s problem, which has to decide on how to upgrade the network. The lower level models the reactions of both the power producers, who take decisions on new facilities and power output, and the Market Operator, which strikes a new balance between demand and supply, providing new Locational Marginal Prices. We illustrate our methodology by means of an example based on Garver’s 6-bus network.

14 citations


Journal ArticleDOI
TL;DR: A new dual ascent method is suggested for optimizing both the semi-Lagrangian dual function and its simplified form for the case of a generic discrete facility location problem, and the method is applied to the uncapacitated facility location problem.
Abstract: Semi-Lagrangian relaxation is known in the literature as a particular way of applying Lagrangian relaxation to certain linear mixed integer programs such that no duality gap results. The resulting Lagrangian subproblem can usually be substantially reduced in size. The method may thus be more efficient in finding an optimal solution to a mixed integer program than a “solver” applied to the initial MIP formulation, provided that “small” optimal multiplier values can be found in a few iterations. Recently, a simplification of the semi-Lagrangian relaxation scheme has been suggested in the literature. This “simplified” approach actually applies ordinary Lagrangian relaxation to a reformulated problem and still shows no duality gap, but the Lagrangian dual reduces to a one-dimensional optimization problem. The price of this simplification is, however, that the Lagrangian subproblem usually cannot be reduced to the same extent as in the case of ordinary semi-Lagrangian relaxation. Hence, an effective method for optimizing the Lagrangian dual function is of utmost importance for obtaining a computational advantage from the simplified Lagrangian dual function. In this paper, we suggest a new dual ascent method for optimizing both the semi-Lagrangian dual function and its simplified form for the case of a generic discrete facility location problem, and apply the method to the uncapacitated facility location problem. Our computational results show that the method generally requires only a few iterations for computing optimal multipliers. Moreover, we give an interesting economic interpretation of the semi-Lagrangian multiplier(s).
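
A hedged sketch of the construction for the uncapacitated facility location problem (our formulation of the standard semi-Lagrangian scheme, not the paper's exact notation): the assignment equalities are weakened to inequalities and then dualized,

$$
L(w)\;=\;\sum_i w_i\;+\;\min_{x,y\in\{0,1\}}\Big\{\sum_j f_j y_j+\sum_{i,j}\bigl(c_{ij}-w_i\bigr)x_{ij}\ :\ \sum_j x_{ij}\le 1\ \ \forall i,\ \ x_{ij}\le y_j\Big\},
$$

so that only assignments with $c_{ij}<w_i$ need to be kept in the subproblem, and for sufficiently large multipliers $w$ the dual $\max_w L(w)$ closes the duality gap; the dual ascent method proposed in the paper looks for such "small but large enough" multipliers in few iterations.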

Journal ArticleDOI
TL;DR: This work designs and implements a version of the viability kernel algorithm suitable for General Purpose GPU (GPGPU) computing using Nvidia’s architecture, CUDA (Compute Unified Device Architecture).
Abstract: Computing a viability kernel consumes time and memory resources which increase exponentially with the dimension of the problem. This curse of dimensionality strongly limits the applicability of this otherwise promising approach. We report here an attempt to tackle this problem with Graphics Processing Units (GPU). We design and implement a version of the viability kernel algorithm suitable for General Purpose GPU (GPGPU) computing using Nvidia’s architecture, CUDA (Compute Unified Device Architecture). Different parts of the algorithm are parallelized on the GPU device and we test the algorithm on a dynamical system of theoretical population growth. We study computing time gains as a function of the number of dimensions and the accuracy of the grid covering the state space. The speed factor reaches up to 20 with the GPU version compared to the Central Processing Unit (CPU) version, making the approach more applicable to problems in 4 to 7 dimensions. We use the GPU version of the algorithm to compute viability kernels of bycatch fishery management problems in up to 6 dimensions.
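
To convey what the algorithm computes, here is a tiny CPU-only NumPy sketch of the basic grid iteration on a one-dimensional harvested-population model (our toy example; the paper's contribution is the CUDA parallelization of this kind of loop for higher-dimensional systems):

```python
# Toy viability-kernel iteration (our illustration; not the paper's GPU code).
# The population x must stay in [xmin, xmax]; the control u is a harvesting quota.
import numpy as np

xmin, xmax, nx = 0.2, 3.0, 300
dt, r, K = 0.1, 1.0, 3.0
grid = np.linspace(xmin, xmax, nx)
controls = np.linspace(0.25, 0.6, 8)          # admissible quotas (minimum quota > 0)

def step(x, u):
    """One Euler step of logistic growth minus an absolute harvesting quota."""
    return x + dt * (r * x * (1.0 - x / K) - u)

viable = np.ones(nx, dtype=bool)              # start from the whole constraint set
while True:
    new_viable = viable.copy()
    for i, x in enumerate(grid):
        if not viable[i]:
            continue
        # keep x only if some control maps it back into the current kernel estimate
        keep = False
        for u in controls:
            j = int(round((step(x, u) - xmin) / (xmax - xmin) * (nx - 1)))
            if 0 <= j < nx and viable[j]:
                keep = True
                break
        new_viable[i] = keep
    if np.array_equal(new_viable, viable):    # fixed point: discrete viability kernel
        break
    viable = new_viable

print("approximate kernel: [%.2f, %.2f]" % (grid[viable].min(), grid[viable].max()))
```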

Journal ArticleDOI
TL;DR: This paper studies a retailer’s inventory and pricing decisions in an advance selling scenario that involves consumers who are strategic and describes the optimal inventory management and pricing policies.
Abstract: Advance selling of goods and services is a form of separating purchase from consumption. It is often employed when consumers are uncertain about their consumption utilities until a short time period before consumption. A book to be released, a concert to attend, or a cruise to take are some examples. Invariably, in consumers’ mind inventory availability (of copies, seats, or rooms) is a concern. In this paper we study a retailer’s inventory and pricing decisions in an advance selling scenario that involves consumers who are strategic. Some consumers not only consider advance and spot prices, but also the uncertainty in future availability of the product (during the spot period) and in their consumption utility from it. We characterize the optimal inventory management and pricing policies, and discuss several interesting aspects of the solution. For example, it can be optimal for the retailer to limit advance sales even if there is more demand for it, and it can be optimal for the retailer to limit its inventory even though there is more capacity to keep it, but not both.

Journal ArticleDOI
TL;DR: The authors deepen the concept and use of customer lifetime value, present some mathematical models for its determination, and provide a general formulation that is less context-specific than existing papers on the customer lifetime.
Abstract: The customer lifetime value (CLV) is an important concept increasingly considered in the field of general marketing and in the management of firms and organizations to increase captured profitability. It represents the total value that a customer produces during his or her lifetime or, better, the measure of the potential profit generated by a customer. Companies use the customer lifetime value to segment customers, analyze the probability of churn, allocate resources or formulate strategies and, therefore, they increasingly derive revenue from the creation and sustenance of long-term relationships with their customers. For this reason, the customer lifetime value is increasingly considered a touchstone for the management of customer relationships. In this article, the authors deepen the concept and use of customer lifetime value and present some mathematical models for its determination. There are many models for this purpose, but most of them are theoretical, complex and not readily applicable. Though not exhaustive, the major contribution of this paper is that it provides a general mathematical formulation to estimate the CLV and that it is less context-specific than the papers on customer lifetime present in the literature.
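
As a reference point (the standard discounted-cash-flow form, stated in our notation rather than taken from the paper), the simplest retention-based CLV expression reads

$$
\mathrm{CLV}\;=\;\sum_{t=1}^{T}\frac{m_t\, r^{t}}{(1+d)^{t}},
$$

where $m_t$ is the margin expected from the customer in period $t$, $r$ the per-period retention probability and $d$ the discount rate; the formulations surveyed and generalized in the paper build on this basic structure.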

Journal ArticleDOI
TL;DR: In this article, the authors explore the properties of such a built-in hedge for a gas-fired power plant via a stochastic programming approach, which enables characterisation of uncertainty in both electricity and gas prices in deriving optimal hedging and generation decisions.
Abstract: Electricity industries worldwide have been restructured in order to introduce competition. As a result, decision makers are exposed to volatile electricity prices, which are positively correlated with those of natural gas in markets with price-setting gas-fired power plants. Consequently, gas-fired plants are said to enjoy a “natural hedge.” We explore the properties of such a built-in hedge for a gas-fired power plant via a stochastic programming approach, which enables characterisation of uncertainty in both electricity and gas prices in deriving optimal hedging and generation decisions. The producer engages in financial hedging by signing forward contracts at the beginning of the month while anticipating uncertainty in spot prices. Using UK energy price data from 2006 to 2011 and daily aggregated dispatch decisions of a typical gas-fired power plant, we find that such a producer does, in fact, enjoy a natural hedge, i.e., it is better off facing uncertain spot prices rather than locking in its generation cost. However, the natural hedge is not a perfect hedge, i.e., even modest risk aversion makes it optimal to use gas forwards partially. Furthermore, greater operational flexibility enhances this natural hedge as generation decisions provide a countervailing response to uncertainty. Conversely, higher energy-conversion efficiency reduces the natural hedge by decreasing the importance of natural gas price volatility and, thus, its correlation with the electricity price.

Journal ArticleDOI
TL;DR: This paper provides an estimate of efficient portfolios, computes the confidence region of the efficient frontier and obtains the prediction densities of future efficient portfolio returns without distributional assumptions on returns.
Abstract: In this paper, we propose a bootstrap resampling methodology to obtain confidence intervals for efficient portfolio weights and the sample characteristics of the mean-variance efficient frontier. We provide an estimate of efficient portfolios, compute the confidence region of the efficient frontier and obtain the prediction densities of future efficient portfolio returns without distributional assumptions on returns. An extensive simulation study evaluates the finite-sample performance of these bootstrap intervals and stresses the advantages of such an approach. Interestingly, the methodology can be easily modified to make inferences that incorporate our modelling of returns in the predictive efficient frontier estimation, with or without additional managerial restrictions.
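
The resampling idea can be illustrated with a short Python sketch (our own minimal example, not the authors' full methodology): resample the historical returns with replacement, recompute the efficient-portfolio weights on each resample, and read confidence intervals off the bootstrap distribution.

```python
# Minimal sketch (our illustration, not the authors' code): bootstrap confidence
# intervals for global minimum-variance portfolio weights.
import numpy as np

rng = np.random.default_rng(0)
T, n = 250, 4
returns = rng.multivariate_normal(np.full(n, 0.0005),
                                  0.0001 * (np.eye(n) + 0.3), size=T)

def min_variance_weights(R):
    """Global minimum-variance weights: w = S^{-1} 1 / (1' S^{-1} 1)."""
    S_inv = np.linalg.inv(np.cov(R, rowvar=False))
    w = S_inv @ np.ones(R.shape[1])
    return w / w.sum()

B = 2000
boot = np.empty((B, n))
for b in range(B):
    idx = rng.integers(0, T, size=T)          # resample rows with replacement
    boot[b] = min_variance_weights(returns[idx])

low, high = np.percentile(boot, [2.5, 97.5], axis=0)
print("point estimate:", np.round(min_variance_weights(returns), 3))
print("95% bootstrap CI:", np.round(low, 3), np.round(high, 3))
```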

Journal ArticleDOI
TL;DR: In this article, the authors show that constraining the sparse norm of portfolio weights automatically controls diversification and selects portfolios with a small number of active weights and low risk, in presence of high correlation and volatility.
Abstract: High levels of correlation among financial assets and extreme losses are typical during crises. In such situations, investing in few assets might be a better choice than holding diversified portfolios. We show that constraining the sparse $\ell_q$-norm of portfolio weights automatically controls diversification and selects portfolios with a small number of active weights and low risk, in the presence of high correlation and volatility. We highlight the diversification relationships between the minimum variance portfolio, risk budgeting strategies and diversification-constrained portfolios. Finally, we show empirically that the $\ell_q$-strategy can successfully cope with bear markets by shrinking portfolio weights and the total amount of shorting.
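
In rough terms (our notation), the diversification-constrained problem referred to above is

$$
\min_{w}\ w^{\top}\Sigma\,w
\quad\text{s.t.}\quad \mathbf{1}^{\top}w=1,\qquad
\lVert w\rVert_q^q=\sum_{i=1}^{n}\lvert w_i\rvert^{q}\ \le\ c,
$$

typically with $0<q\le 1$, so that tightening the bound $c$ drives small weights to zero and concentrates the portfolio in a few assets.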

Journal ArticleDOI
TL;DR: The welfare difference between each regime of monitoring is defined for a fairly inclusive electricity generation model, some predictions are formulated, and it is found that the welfare loss from collective monitoring can be small if the constraints are tight.
Abstract: This paper investigates the costs of monitoring a distributed multi-agent economic activity in the presence of constraints on the agents’ joint outputs. If the regulator monitors agents individually, she calculates each agent’s optimal contribution to the constraints by solving a constrained welfare-maximisation problem. This will maximise welfare but may be expensive because monitoring cost rises with the number of agents. Alternatively, the regulator could monitor agents collectively, using a detector, or detectors, to observe whether each constraint is jointly satisfied. This will ease the implementation cost, but lower welfare. We define the welfare difference between each regime of monitoring for a fairly inclusive electricity generation model and formulate some predictions. The behaviour of two generators in a coupled-constrained, three-node case study reproduces these predictions. We find that the welfare loss from collective monitoring can be small if the constraints are tight. We also learn that, under either regime, the imposition of transmission and environmental restrictions may benefit the less efficient generator and shift surplus share towards the emitters, decreasing consumer surplus.

Journal ArticleDOI
TL;DR: This work investigates the efficiency of linearizing the second-order cone constraints of a linear program with an ellipsoidal uncertainty set, using the optimal linear outer-approximation approach by Ben-Tal and Nemirovski, from which an optimal inner approximation of the second-order cone is derived.
Abstract: Robust optimization is an important technique to immunize optimization problems against data uncertainty. In the case of a linear program and an ellipsoidal uncertainty set, the robust counterpart turns into a second-order cone program. In this work, we investigate the efficiency of linearizing the second-order cone constraints of the latter. This is done using the optimal linear outer-approximation approach by Ben-Tal and Nemirovski (Math Oper Res 26:193–205, 2001) from which we derive an optimal inner approximation of the second-order cone. We examine the performance of this approach on various benchmark sets including portfolio optimization instances as well as (robustified versions of) the MIPLIB and the SNDlib.
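
As background (standard robust-optimization algebra rather than a result of this paper), a single linear constraint under an ellipsoidal uncertainty set turns into the second-order cone constraint that the paper then approximates by linear inequalities:

$$
a^{\top}x\le b\ \ \forall\,a\in\{\bar a+Pu:\lVert u\rVert_2\le 1\}
\quad\Longleftrightarrow\quad
\bar a^{\top}x+\lVert P^{\top}x\rVert_2\le b .
$$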

Journal ArticleDOI
TL;DR: Under some moderate conditions, it is demonstrated that, with probability approaching 1 at an exponential rate as the sample size increases, the optimal solution of the smoothing SAA problem converges to its true counterpart.
Abstract: In this paper, we develop a stochastic programming model for economic dispatch of a power system with operational reliability and risk control constraints. By defining a severity-index function, we propose to use conditional value-at-risk (CVaR) for measuring the reliability and risk control of the system. The economic dispatch is subsequently formulated as a stochastic program with a CVaR constraint. To solve the stochastic optimization model, we propose a penalized sample average approximation (SAA) scheme which incorporates specific features of a smoothing technique and a level function method. Under some moderate conditions, we demonstrate that, with probability approaching 1 at an exponential rate as the sample size increases, the optimal solution of the smoothing SAA problem converges to its true counterpart. Numerical tests have been carried out for a standard IEEE-30 DC power system.
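
For reference, the CVaR constraint can be written in the standard Rockafellar–Uryasev form (our generic notation; the paper applies it to a severity-index function of the dispatch):

$$
\mathrm{CVaR}_{\alpha}\bigl(L(x,\xi)\bigr)
=\min_{\eta\in\mathbb{R}}\Bigl\{\eta+\tfrac{1}{1-\alpha}\,\mathbb{E}\bigl[(L(x,\xi)-\eta)_{+}\bigr]\Bigr\}\ \le\ \tau,
$$

and in the SAA scheme the expectation is replaced by a sample average, with the nonsmooth $(\cdot)_{+}$ term handled by the smoothing technique described in the abstract.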

Journal ArticleDOI
TL;DR: Several integer programming (IP) formulations are proposed to exactly solve the minimum-cost λ-edge-connected k-subgraph problem, or the (k,λ)-subgraph problem, based on its graph properties, and some special graph properties are proved to find stronger and more compact IP formulations.
Abstract: In this paper, we propose several integer programming (IP) formulations to exactly solve the minimum-cost $\lambda$-edge-connected k-subgraph problem, or the $(k,\lambda)$-subgraph problem, based on its graph properties. Special cases of this problem include the well-known k-minimum spanning tree problem (if $\lambda=1$), the $\lambda$-edge-connected spanning subgraph problem (if $k=|V|$) and the k-clique problem (if $\lambda=k-1$ and there are exactly k vertices in the subgraph). As a generalization of the k-minimum spanning tree problem and a special case of the $(k,\lambda)$-subgraph problem, the (k, 2)-subgraph problem is studied, and some special graph properties are proved to find stronger and more compact IP formulations. Additionally, we study the valid inequalities for these IP formulations. Numerical experiments are performed to compare the proposed IP formulations and inequalities.

Journal ArticleDOI
TL;DR: This work considers the problem where a manager aims to minimize the probability of his portfolio return falling below a threshold while keeping the expected return no worse than a target, under the assumption that stock returns are Log-Normally distributed, and proposes a two-stage solution approach.
Abstract: We consider the problem where a manager aims to minimize the probability of his portfolio return falling below a threshold while keeping the expected return no worse than a target, under the assumption that stock returns are Log-Normally distributed. This assumption, common in the finance literature for daily and weekly returns, creates computational difficulties because the distribution of the portfolio return is difficult to estimate precisely. We approximate it with a single Log-Normal random variable using the Fenton–Wilkinson method and investigate an iterative, data-driven approximation to the problem. We propose a two-stage solution approach, where the first stage requires solving a classic mean-variance optimization model and the second stage involves solving an unconstrained nonlinear problem with a smooth objective function. We suggest an iterative calibration method to improve the accuracy of the method and test its performance against a Generalized Pareto Distribution approximation. We also extend our results to the design of basket options.
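
The Fenton–Wilkinson step can be sketched in a few lines of Python (our illustration, not the authors' implementation): the weighted sum of correlated log-normal returns is replaced by a single log-normal whose first two moments match.

```python
# Minimal sketch (our illustration, not the authors' code): Fenton-Wilkinson
# approximation of a weighted sum of correlated log-normal returns by a single
# log-normal, matching the first two moments.
import numpy as np

def fenton_wilkinson(w, mu, cov):
    """Return (mu_S, sigma_S) of the log-normal approximating S = sum_i w_i exp(Z_i),
    where Z ~ N(mu, cov) and w > 0."""
    var = np.diag(cov)
    m1 = np.sum(w * np.exp(mu + 0.5 * var))                       # E[S]
    m2 = 0.0                                                      # E[S^2]
    for i in range(len(w)):
        for j in range(len(w)):
            m2 += w[i] * w[j] * np.exp(mu[i] + mu[j]
                                       + 0.5 * (var[i] + var[j]) + cov[i, j])
    sigma2 = np.log(m2 / m1 ** 2)
    return np.log(m1) - 0.5 * sigma2, np.sqrt(sigma2)

# quick sanity check against simulation
rng = np.random.default_rng(1)
w = np.array([0.5, 0.3, 0.2])
mu = np.array([0.01, 0.005, 0.0])
cov = 0.04 * np.array([[1.0, 0.3, 0.2], [0.3, 1.0, 0.1], [0.2, 0.1, 1.0]])
mu_S, sigma_S = fenton_wilkinson(w, mu, cov)
sims = rng.multivariate_normal(mu, cov, size=200_000)
S = np.exp(sims) @ w
print("FW mean / simulated mean:", np.exp(mu_S + 0.5 * sigma_S**2), S.mean())
```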

Journal ArticleDOI
TL;DR: A robust optimization approach is presented that incorporates uncertainty in the Bass diffusion model for new products as well as in the price response function of partners that collaborate with the company to bring its products to market, and a conservative approximation of it can be reformulated as a mixed integer linear programming problem.
Abstract: We consider a problem where a company must decide the order in which to launch new products within a given time horizon and budget constraints, and where the parameters of the adoption rate of these new products are subject to uncertainty. This uncertainty can bring significant change to the optimal launch sequence. We present a robust optimization approach that incorporates such uncertainty on the Bass diffusion model for new products as well as on the price response function of partners that collaborate with the company in order to bring its products to market. The decision-maker optimizes his worst-case profit over an uncertainty set where nature chooses the time periods in which (integer) units of the budgets of uncertainty are used for worst impact. This leads to uncertainty sets with binary variables. We show that a conservative approximation of the robust problem can nonetheless be reformulated as a mixed integer linear programming problem, is therefore of the same structure as the deterministic problem and can be solved in a tractable manner. Finally, we illustrate our approach on numerical experiments. Our model also incorporates contracts with potential commercialization partners. The key output of our work is a sequence of product launch times that protects the decision-maker against parameter uncertainty for the adoption rates of the new products and the response of potential partners to partnership offers.
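
As a reminder of the model being robustified (the standard Bass formulation in our notation, not the paper's uncertainty set), cumulative adoption $F(t)$ satisfies

$$
\frac{f(t)}{1-F(t)}=p+q\,F(t),
\qquad
F(t)=\frac{1-e^{-(p+q)t}}{1+\frac{q}{p}\,e^{-(p+q)t}},
$$

with innovation coefficient $p$ and imitation coefficient $q$; it is the uncertainty in these adoption parameters, together with the partners' price response, against which the launch sequence is protected.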

Journal ArticleDOI
TL;DR: An approach to the data-driven newsvendor problem is proposed that incorporates a correction factor to account for rare events, for the setting where the decision-maker has few historical data points at his disposal but knows the range of the demand.
Abstract: We propose an approach to the data-driven newsvendor problem that incorporates a correction factor to account for rare events, when the decision-maker has few historical data points at his disposal but knows the range of the demand. This mitigates a weakness of pure data-driven methodologies, specifically, the fact that they under-protect the system against tail events, which are in general under-observed in the empirical demand distribution. We test the approach in extensive computational experiments and provide a summary table of the numerical experiments to help the decision maker gain further insights.
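
For orientation, here is the pure data-driven (sample-quantile) newsvendor decision that such correction factors amend; this sketch is our illustration of the baseline only and does not reproduce the paper's correction term.

```python
# Minimal sketch (our illustration): the pure data-driven (sample-quantile) newsvendor
# order that correction factors of the kind described above amend; the correction
# itself is not reproduced here.
import numpy as np

def data_driven_order(demand_samples, price, cost):
    """Order the empirical quantile at the critical fractile (price - cost) / price."""
    fractile = (price - cost) / price
    return np.quantile(demand_samples, fractile)

rng = np.random.default_rng(2)
few_samples = rng.gamma(shape=2.0, scale=50.0, size=12)   # few historical data points
print("order quantity:", round(data_driven_order(few_samples, price=10.0, cost=6.0), 1))
```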

Journal ArticleDOI
TL;DR: Along with deregulation, decarbonisation has more recently begun to transform the energy sector and policymakers in most industrialised countries have passed legislation to support investment in renewable energy technologies as well as to facilitate greater flexibility on the demand side.
Abstract: Historically state regulated, many industries within the energy sector, e.g., electricity and natural gas, have been gradually liberalised over the past three decades in an effort to improve economic efficiency (Wilson 2002). Instead of a vertically integrated paradigm with regulated monopolies that controlled both production and retail interests, decision makers within the energy sector must now cope with uncertain prices and competition from rivals. Exposure to such market forces was thought to encourage firms and consumers to foster technological innovation and to undertake efficiency investments, respectively. However, the transition towards a deregulated energy sector also meant that conventional, single-agent models for supporting investment and operational decisions may no longer be adequate. Indeed, in a survey of modelling tools for the electric power industry, Hobbs (1995) foresaw the need for a better representation of uncertainty and strategic interactions. Along with deregulation, decarbonisation has more recently begun to transform the energy sector. In an attempt to mitigate the impact of climate change, policymakers in most industrialised countries have passed legislation to support investment in renewable energy technologies as well as to facilitate greater flexibility on the demand side.