
Showing papers in "Journal of Computational Finance in 2013"


Journal ArticleDOI
TL;DR: In this article, the Fourier cosine expansion method is used for the pricing of swing options in a Lévy process model, where contracts can be exercised at any time before the end of the contract and more than once.
Abstract: Swing options give contract holders the right to modify amounts of future delivery of certain commodities, such as electricity or gas. We assume that these options can be exercised at any time before the end of the contract, and more than once. However, a recovery time between any two consecutive exercise dates is incorporated as a constraint to avoid continuous exercise. We introduce an efficient way of pricing these swing options, based on the Fourier cosine expansion method, which is especially suitable when the underlying is modeled by a Lévy process.

29 citations
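
For readers unfamiliar with the Fourier cosine expansion (COS) method that the pricer above builds on, the sketch below prices a plain European call under geometric Brownian motion with it, so that the Black-Scholes formula is available as a check. This is only the basic COS machinery, not the paper's swing-option algorithm or its recovery-time constraint; the model, the truncation constant L and all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import norm

def cos_call_price(S0, K, T, r, sigma, N=256, L=10.0):
    """European call via the COS (Fourier cosine expansion) method under GBM.
    Illustrative sketch only: the characteristic function below is the
    Black-Scholes one; a Levy model would plug in its own."""
    x = np.log(S0 / K)                          # log-moneyness
    c1 = x + (r - 0.5 * sigma**2) * T           # first cumulant of ln(S_T/K)
    c2 = sigma**2 * T                           # second cumulant
    a, b = c1 - L * np.sqrt(c2), c1 + L * np.sqrt(c2)   # truncation range

    k = np.arange(N)
    u = k * np.pi / (b - a)

    # characteristic function of ln(S_T/K) under Black-Scholes
    phi = np.exp(1j * u * (x + (r - 0.5 * sigma**2) * T)
                 - 0.5 * sigma**2 * u**2 * T)

    # payoff cosine coefficients V_k for a call: 2/(b-a) * K * (chi - psi) on [0, b]
    c, d = 0.0, b
    chi = (np.cos(u * (d - a)) * np.exp(d) - np.cos(u * (c - a)) * np.exp(c)
           + u * np.sin(u * (d - a)) * np.exp(d)
           - u * np.sin(u * (c - a)) * np.exp(c)) / (1.0 + u**2)
    psi = np.empty(N)
    psi[0] = d - c
    psi[1:] = (np.sin(u[1:] * (d - a)) - np.sin(u[1:] * (c - a))) / u[1:]
    Vk = 2.0 / (b - a) * K * (chi - psi)

    terms = np.real(phi * np.exp(-1j * u * a)) * Vk
    terms[0] *= 0.5                             # first cosine term gets weight 1/2
    return np.exp(-r * T) * terms.sum()

if __name__ == "__main__":
    S0, K, T, r, sigma = 100.0, 110.0, 1.0, 0.03, 0.25
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    bs = S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d1 - sigma * np.sqrt(T))
    print(cos_call_price(S0, K, T, r, sigma), bs)   # the two should agree closely
```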


Journal ArticleDOI
TL;DR: In this paper, the problem of pricing American-type compound options is considered when the underlying dynamics follow Heston's stochastic volatility model with a stochastic interest rate driven by a Cox-Ingersoll-Ross process.
Abstract: A compound option (the mother option) gives the holder the right, but not the obligation, to buy (long) or sell (short) the underlying option (the daughter option). In this paper, we consider the problem of pricing American-type compound options when the underlying dynamics follow Heston's stochastic volatility model with a stochastic interest rate driven by a Cox-Ingersoll-Ross process. We use a partial differential equation (PDE) approach to obtain a numerical solution. The problem is formulated as the solution to a two-pass free-boundary PDE problem, which is solved via a sparse grid approach and is found to be accurate and efficient compared with the results from a benchmark solution based on a least-squares Monte Carlo simulation combined with the projected successive over-relaxation method.

28 citations
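
Projected successive over-relaxation (PSOR), used in the benchmark above, is a generic iterative solver for the linear complementarity problems that early-exercise constraints produce once a pricing PDE is discretized. The sketch below applies it to a toy one-dimensional obstacle problem rather than the Heston-CIR compound-option PDE of the paper; the matrix, the obstacle and the relaxation parameter are all illustrative.

```python
import numpy as np

def psor(A, b, psi, omega=1.5, tol=1e-8, max_iter=5_000):
    """Projected SOR for the linear complementarity problem
         A x >= b,  x >= psi,  (x - psi) . (A x - b) = 0,
    which is the algebraic core of PSOR-based American-option solvers.
    Illustrative sketch; A must have a nonzero diagonal."""
    n = len(b)
    x = np.maximum(b / np.diag(A), psi)              # simple feasible start
    for _ in range(max_iter):
        x_old = x.copy()
        for i in range(n):
            gs = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
            x[i] = max(psi[i], x[i] + omega * (gs - x[i]))   # relax, then project
        if np.max(np.abs(x - x_old)) < tol:
            break
    return x

if __name__ == "__main__":
    # toy obstacle problem: -u'' >= 0 on (0,1), u >= obstacle, u(0) = u(1) = 0
    n, h = 99, 1.0 / 100
    grid = np.linspace(h, 1 - h, n)
    A = (np.diag(np.full(n, 2.0)) + np.diag(np.full(n - 1, -1.0), 1)
         + np.diag(np.full(n - 1, -1.0), -1)) / h**2
    b = np.zeros(n)
    obstacle = 0.3 - 2.0 * np.abs(grid - 0.5)        # tent-shaped obstacle
    u = psor(A, b, obstacle)
    print(u.max(), (u >= obstacle - 1e-10).all())
```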


Journal ArticleDOI
TL;DR: In this paper, the authors derive analytical solutions for minimizing the expected execution cost under discrete jump diffusion models, where random jump amplitudes capture uncertain permanent price impact of other large buy and sell trades.
Abstract: In the execution cost problem, an investor wants to minimize the total expected cost and risk in the execution of a portfolio of risky assets to achieve desired positions. A major source of the execution cost comes from the price impacts of both the investor's own trades and other concurrent institutional trades. Indeed, the price impact of large trades has been considered one of the main reasons for the fat tails of the short-term return's probability distribution. However, current models in the literature on the execution cost problem typically assume normal distributions. This assumption fails to capture the characteristics of tail distributions due to institutional trades. In this paper we argue that compound jump diffusion processes naturally model the uncertain price impact of other large trades. This jump diffusion model includes two compound Poisson processes whose random jump amplitudes capture the uncertain permanent price impact of other large buy and sell trades. Using stochastic dynamic programming, we derive analytical solutions for minimizing the expected execution cost under discrete jump diffusion models. Our results indicate that, when the expected market price change is nonzero, likely due to large trades, assumptions on the market price model and the values of the mean and covariance of the market price change can have a significant impact on the optimal execution strategy. Using simulations, we computationally illustrate minimum-CVaR execution strategies under different models. Furthermore, we analyze qualitative and quantitative differences in the expected execution cost and risk between optimal execution strategies determined under a multiplicative jump diffusion model and an additive jump diffusion model.

27 citations
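
As a rough illustration of the kind of dynamics the paper argues for, the sketch below simulates an additive price model with a Brownian component plus two compound Poisson streams standing in for the permanent impact of other large buy and sell trades, and compares the cost dispersion of a naive uniform liquidation schedule with and without the jump component. The exponential jump sizes, all parameter values and the uniform schedule are made up for illustration; this is not the paper's calibrated model, its optimal strategy or its CVaR computation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_additive_jump_diffusion(p0, T, n_steps, sigma, lam_buy, lam_sell,
                                     jump_scale, n_paths):
    """Additive price model: dP = sigma dW + dJ_buy - dJ_sell, where J_buy and
    J_sell are compound Poisson processes with exponential jump sizes standing
    in for the permanent impact of other large buy / sell trades."""
    dt = T / n_steps
    prices = np.full((n_paths, n_steps + 1), p0, dtype=float)
    for k in range(n_steps):
        diffusion = sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
        n_up = rng.poisson(lam_buy * dt, n_paths)
        n_dn = rng.poisson(lam_sell * dt, n_paths)
        # a sum of n exponential jumps is Gamma(shape=n, scale=jump_scale)
        up = rng.gamma(np.where(n_up > 0, n_up, 1), jump_scale) * (n_up > 0)
        dn = rng.gamma(np.where(n_dn > 0, n_dn, 1), jump_scale) * (n_dn > 0)
        prices[:, k + 1] = prices[:, k] + diffusion + up - dn
    return prices

if __name__ == "__main__":
    p0, T, n_steps, n_paths, shares = 100.0, 1.0, 50, 20_000, 1_000.0
    child = np.full(n_steps, shares / n_steps)       # uniform (TWAP-like) schedule
    for lam in (0.0, 5.0):                           # diffusion only vs. with jumps
        paths = simulate_additive_jump_diffusion(
            p0, T, n_steps, sigma=1.0, lam_buy=lam, lam_sell=lam,
            jump_scale=0.5, n_paths=n_paths)
        proceeds = (paths[:, 1:] * child).sum(axis=1)
        shortfall = shares * p0 - proceeds           # implementation shortfall
        print(f"jump intensity {lam}: mean shortfall {shortfall.mean():9.2f},"
              f" std {shortfall.std():9.2f}")
```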


Journal ArticleDOI
TL;DR: This work presents an alternative solution and shows how to adapt an MC algorithm in such a way that its results can be stably differentiated by simple finite differences, while also possessing significantly reduced variance.
Abstract: We consider the pricing of a special kind of option, the so-called autocallables, which may terminate prior to maturity due to a barrier condition on one or several underlyings. Standard Monte Carlo (MC) algorithms work well for pricing these options, but they do not behave stably with respect to numerical differentiation. Hence, to calculate sensitivities, one would typically resort to regularized differentiation schemes or derive an algorithm for directly calculating the derivative. In this work we present an alternative solution and show how to adapt an MC algorithm in such a way that its results can be stably differentiated by simple finite differences. Our main tool is the one-step survival idea of Glasserman and Staum, which we combine with a technique known as GHK importance sampling for treating multiple underlyings. Besides the stability with respect to differentiation, our new algorithm also possesses significantly reduced variance and does not require evaluations of multivariate cumulative normal distributions.

18 citations
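
The one-step survival idea of Glasserman and Staum can be shown on a single underlying: rather than discarding a path once the barrier is breached, each step multiplies the path's weight by the conditional probability of surviving that step and then draws the increment from the distribution conditional on survival (a truncated normal). The sketch below does this for a discretely monitored down-and-out call under Black-Scholes; with one asset there is no GHK importance-sampling step, and all parameters are illustrative.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

def one_step_survival_doc(S0, K, B, T, r, sigma, n_steps, n_paths):
    """Down-and-out call, discretely monitored, priced with the one-step
    survival trick: paths never knock out; instead each path carries the
    product of its per-step survival probabilities as a weight."""
    dt = T / n_steps
    drift, vol = (r - 0.5 * sigma**2) * dt, sigma * np.sqrt(dt)
    x = np.full(n_paths, np.log(S0))
    w = np.ones(n_paths)
    log_b = np.log(B)
    for _ in range(n_steps):
        z_star = (log_b - x - drift) / vol          # barrier in standardised units
        p_surv = 1.0 - norm.cdf(z_star)             # P(stay above barrier this step)
        w *= p_surv
        u = rng.uniform(size=n_paths)
        # sample Z conditional on Z > z_star via the inverse CDF of the truncation
        z = norm.ppf(norm.cdf(z_star) + u * p_surv)
        x = x + drift + vol * z
    payoff = np.exp(-r * T) * w * np.maximum(np.exp(x) - K, 0.0)
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_paths)

def plain_mc_doc(S0, K, B, T, r, sigma, n_steps, n_paths):
    """Standard Monte Carlo with a hard knock-out indicator, for comparison."""
    dt = T / n_steps
    drift, vol = (r - 0.5 * sigma**2) * dt, sigma * np.sqrt(dt)
    x = np.log(S0) + np.cumsum(
        drift + vol * rng.standard_normal((n_paths, n_steps)), axis=1)
    alive = (x > np.log(B)).all(axis=1)
    payoff = np.exp(-r * T) * alive * np.maximum(np.exp(x[:, -1]) - K, 0.0)
    return payoff.mean(), payoff.std(ddof=1) / np.sqrt(n_paths)

if __name__ == "__main__":
    args = dict(S0=100.0, K=100.0, B=90.0, T=1.0, r=0.02,
                sigma=0.3, n_steps=12, n_paths=100_000)
    print("one-step survival:", one_step_survival_doc(**args))
    print("plain Monte Carlo:", plain_mc_doc(**args))
```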


Journal ArticleDOI
TL;DR: Taking advantage of the lag variable tracking the time interval between trades, the authors provide an explicit backward numerical scheme for the time discretization of the DPQVI, and present numerical tests of sensitivity with respect to the bid/ask spread and market impact parameters.
Abstract: This paper deals with numerical solutions to an impulse control problem arising from optimal portfolio liquidation with bid-ask spread and market price impact penalizing speedy execution trades. The corresponding dynamic programming (DP) equation is a quasi-variational inequality (QVI) with a solvency constraint satisfied by the value function in the sense of constrained viscosity solutions. By taking advantage of the lag variable tracking the time interval between trades, we can provide an explicit backward numerical scheme for the time discretization of the DPQVI. The convergence of this discrete-time scheme is shown by viscosity solutions arguments. An optimal quantization method is used for computing the (conditional) expectations arising in this scheme. Numerical results are presented examining the behaviour of optimal liquidation strategies, together with a comparative performance analysis with respect to some benchmark execution strategies. We also illustrate our optimal liquidation algorithm on real data, and observe various interesting patterns of order execution strategies. Finally, we provide some numerical tests of sensitivity with respect to the bid/ask spread and market impact parameters.

16 citations
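
Optimal quantization, used above for the conditional expectations in the scheme, replaces a continuous distribution by a finite grid of points and weights so that expectations become short weighted sums. The sketch below runs Lloyd's fixed-point iteration to build a quantizer of the standard normal distribution and uses it to approximate an expectation; it only shows the mechanics on a one-dimensional toy, not the quantization of the liquidation problem's state variables.

```python
import numpy as np
from scipy.stats import norm

def lloyd_quantizer_1d(n_points, n_iter=200):
    """Lloyd's algorithm for an n-point quantizer of the standard normal:
    iterate 'replace each point by the centroid of its Voronoi cell' to a
    fixed point. Returns the grid points and their probability weights."""
    x = np.linspace(-2.0, 2.0, n_points)              # crude initial grid
    for _ in range(n_iter):
        mids = np.concatenate(([-np.inf], (x[:-1] + x[1:]) / 2.0, [np.inf]))
        w = norm.cdf(mids[1:]) - norm.cdf(mids[:-1])  # cell probabilities
        # conditional mean of a standard normal on each cell (closed form)
        x = (norm.pdf(mids[:-1]) - norm.pdf(mids[1:])) / w
    return x, w

if __name__ == "__main__":
    x, w = lloyd_quantizer_1d(20)
    f = lambda z: np.maximum(np.exp(0.2 * z) - 1.0, 0.0)     # any test functional
    quantized = np.sum(w * f(x))
    # compare the 20-point quantizer against a large Monte Carlo estimate
    mc = f(np.random.default_rng(2).standard_normal(2_000_000)).mean()
    print(quantized, mc)
```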


Journal ArticleDOI
TL;DR: In this paper, the authors consider the discretized version of a two-factor model introduced by Benth and coauthors for the electricity markets and provide an algorithm, based on the celebrated Föllmer-Schweizer decomposition, for solving the mean-variance hedging problem.
Abstract: We consider the discretized version of a (continuous-time) two-factor model introduced by Benth and coauthors for the electricity markets. For this model, the underlying is the exponential of a sum of independent random variables. We provide and test an algorithm, based on the celebrated Föllmer-Schweizer decomposition, for solving the mean-variance hedging problem. In particular, we establish that decomposition explicitly for a large class of vanilla contingent claims. Particular attention is devoted to the choice of rebalancing dates and their impact on the hedging error, with regard to the payoff regularity and the non-stationarity of the log-price process.

13 citations
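
In discrete time, the Föllmer-Schweizer decomposition leads to a backward recursion in which the hedge ratio at each rebalancing date is the conditional covariance of the next hedge value with the next price increment divided by the conditional variance of that increment. The paper derives this explicitly for its two-factor electricity model; the sketch below only shows the generic regression-based recursion on simulated paths of a single risky asset, with an illustrative polynomial basis, model and parameters.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_risk_min_hedge(S, payoff, deg=4):
    """Backward recursion for the discrete-time Foellmer-Schweizer /
    local-risk-minimisation hedge on simulated paths S (n_paths x (N+1)).
    Conditional moments are estimated by polynomial regression in S_k.
    Returns the initial capital V_0 and the time-0 hedge ratio xi_0."""
    n_paths, n_dates = S.shape
    V = payoff(S[:, -1]).astype(float)
    xi0 = 0.0
    for k in range(n_dates - 2, -1, -1):
        dS = S[:, k + 1] - S[:, k]
        X = np.vander(S[:, k] / S[0, 0], deg + 1)       # normalised polynomial basis
        def cond(y):                                    # fitted conditional mean
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            return X @ beta
        e_v, e_ds = cond(V), cond(dS)
        cov = cond(V * dS) - e_v * e_ds                 # Cov(V_{k+1}, dS | S_k)
        var = cond(dS * dS) - e_ds**2                   # Var(dS | S_k)
        xi = cov / np.maximum(var, 1e-12)
        V = e_v - xi * e_ds                             # value process recursion
        if k == 0:
            xi0 = xi.mean()                             # S_0 is identical across paths
    return V.mean(), xi0

if __name__ == "__main__":
    # illustrative exponential-of-Gaussian price paths (zero interest rate)
    S0, sigma, T, n_steps, n_paths = 100.0, 0.3, 1.0, 12, 50_000
    dt = T / n_steps
    incr = (-0.5 * sigma**2 * dt
            + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps)))
    S = S0 * np.exp(np.hstack([np.zeros((n_paths, 1)), np.cumsum(incr, axis=1)]))
    V0, xi0 = local_risk_min_hedge(S, payoff=lambda s: np.maximum(s - 100.0, 0.0))
    print("initial capital:", V0, " initial hedge ratio:", xi0)
```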


Journal ArticleDOI
TL;DR: In this article, the authors proposed a scheme based on the Milstein discretization of the SDE with order one of weak trajectorial convergence for the asset price, and a scheme, based on a Ninomiya-Victoir discretisation of this SDE, with order two of weak convergence for asset price.
Abstract: In usual stochastic volatility models, the process driving the volatility of the asset price evolves according to an autonomous one-dimensional stochastic differential equation. We assume that the coefficients of this equation are smooth. Using Ito's formula, we get rid, in the asset price dynamics, of the stochastic integral with respect to the Brownian motion driving this SDE. Taking advantage of this structure, we propose - a scheme, based on the Milstein discretization of this SDE, with order one of weak trajectorial convergence for the asset price, - a scheme, based on the Ninomiya-Victoir discretization of this SDE, with order two of weak convergence for the asset price. We also propose a specific scheme with improved convergence properties when the volatility of the asset price is driven by an Orstein-Uhlenbeck process. We confirm the theoretical rates of convergence by numerical experiments and show that our schemes are well adapted to the multilevel Monte Carlo method introduced by Giles [2008a, 2008b].

12 citations
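
For a scalar SDE dY = a(Y) dt + b(Y) dW with smooth coefficients, the Milstein step that the first scheme builds on adds the correction 0.5 * b(Y) * b'(Y) * ((dW)^2 - dt) to the Euler step. The sketch below implements that generic step and compares its strong error with Euler's on an SDE whose exact solution is known; it is not the authors' asset-price scheme, and the test equation and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def milstein_path(y0, a, b, db, T, n_steps, dW):
    """Milstein scheme for the scalar SDE dY = a(Y) dt + b(Y) dW:
       Y_{n+1} = Y_n + a dt + b dW + 0.5 * b * b' * (dW^2 - dt)."""
    dt = T / n_steps
    y = np.full(dW.shape[0], y0, dtype=float)
    for k in range(n_steps):
        w = dW[:, k]
        y = y + a(y) * dt + b(y) * w + 0.5 * b(y) * db(y) * (w * w - dt)
    return y

if __name__ == "__main__":
    # test on dY = mu*Y dt + nu*Y dW, whose exact solution is known
    mu, nu, y0, T, n_steps, n_paths = 0.05, 0.4, 1.0, 1.0, 64, 200_000
    dt = T / n_steps
    dW = np.sqrt(dt) * rng.standard_normal((n_paths, n_steps))
    exact = y0 * np.exp((mu - 0.5 * nu**2) * T + nu * dW.sum(axis=1))
    mil = milstein_path(y0, lambda y: mu * y, lambda y: nu * y,
                        lambda y: nu + 0.0 * y, T, n_steps, dW)
    # Euler for comparison: drop the (dW^2 - dt) correction
    eul = np.full(n_paths, y0)
    for k in range(n_steps):
        eul = eul + mu * eul * dt + nu * eul * dW[:, k]
    print("strong error, Euler   :", np.mean(np.abs(eul - exact)))
    print("strong error, Milstein:", np.mean(np.abs(mil - exact)))
```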


Journal ArticleDOI
TL;DR: In this paper, a numerical method for pricing Bermudan options on a large number of underlyings is presented, where asset prices are modeled with exponential time-inhomogeneous jump-diffusion processes.
Abstract: We present a numerical method for pricing Bermudan options on a large number of underlyings. The asset prices are modeled with exponential time-inhomogeneous jump-diffusion processes. We improve the least-squares Monte Carlo method proposed by Longstaff and Schwartz by introducing an efficient variance reduction scheme. A control variate is obtained from a low-dimensional approximation of the multivariate Bermudan option. To this end, we adapt a model reduction method called proper orthogonal decomposition (POD), which is closely related to principal component analysis, to the case of Bermudan options. Our goal is to make use of the correlation structure of the assets in an optimal way. We compute the expectation of the control variate by either solving a low-dimensional partial integro-differential equation or by applying Fourier methods. The POD approximation can also be used as a candidate for the minimizing martingale in the dual pricing approach suggested by Rogers. We evaluate both approaches in numerical experiments.

12 citations
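
The baseline being improved here is the Longstaff-Schwartz least-squares Monte Carlo method. For reference, a minimal single-asset Bermudan put version of that baseline is sketched below, with no jumps, no POD control variate and a simple polynomial regression basis; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

def lsmc_bermudan_put(S0, K, r, sigma, T, n_ex, n_paths, deg=3):
    """Plain Longstaff-Schwartz: regress discounted continuation values on
    polynomials of the spot at each exercise date and exercise when the
    intrinsic value exceeds the fitted continuation value."""
    dt = T / n_ex
    disc = np.exp(-r * dt)
    z = rng.standard_normal((n_paths, n_ex))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    cash = np.maximum(K - S[:, -1], 0.0)              # value if held to maturity
    for k in range(n_ex - 2, -1, -1):
        cash *= disc                                  # discount back one period
        intrinsic = np.maximum(K - S[:, k], 0.0)
        itm = intrinsic > 0                           # regress on in-the-money paths only
        if itm.any():
            coeff = np.polyfit(S[itm, k], cash[itm], deg)
            continuation = np.polyval(coeff, S[itm, k])
            exercise = intrinsic[itm] > continuation
            cash[itm] = np.where(exercise, intrinsic[itm], cash[itm])
    return disc * cash.mean()

if __name__ == "__main__":
    print(lsmc_bermudan_put(S0=100.0, K=100.0, r=0.05, sigma=0.2,
                            T=1.0, n_ex=50, n_paths=100_000))
```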


Journal ArticleDOI
TL;DR: In this article, a hybrid Monte Carlo-optimal quantization method is used to approximate the conditional survival probabilities of a firm, given a structural model for its credit default, under partial information.
Abstract: In this paper we use a hybrid Monte Carlo-optimal quantization method to approximate the conditional survival probabilities of a firm, given a structural model for its credit default, under partial information. We consider the case when the firm's value is a non-observable stochastic process $(V_t)_{t \geq 0}$ and investors in the market have access to a process $(S_t)_{t \geq 0}$ whose value at each time t is related to $(V_s, s \leq t)$. We are interested in the computation of the conditional survival probabilities of the firm given the "investor information". As an application, we analyse the shape of the credit spread curve for zero-coupon bonds in two examples.

10 citations
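
Under full information, the structural ingredient above is a first-passage problem for the firm-value process. The sketch below gives a plain Monte Carlo estimate of the unconditional survival probability for a geometric Brownian firm value and a constant barrier on a discrete monitoring grid; the paper's actual contribution, filtering the unobserved value process and quantizing the resulting conditional laws, is not reproduced, and the parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(6)

def survival_probability_mc(V0, barrier, mu, sigma, T, n_steps, n_paths):
    """P(min_{t<=T} V_t > barrier) for a geometric Brownian firm value,
    estimated on a discrete monitoring grid (a full-information baseline;
    the paper works with the conditional law given noisy observations)."""
    dt = T / n_steps
    logV = np.log(V0) + np.cumsum(
        (mu - 0.5 * sigma**2) * dt
        + sigma * np.sqrt(dt) * rng.standard_normal((n_paths, n_steps)), axis=1)
    alive = logV.min(axis=1) > np.log(barrier)
    p = alive.mean()
    return p, np.sqrt(p * (1 - p) / n_paths)

if __name__ == "__main__":
    for T in (1.0, 3.0, 5.0, 10.0):
        p, se = survival_probability_mc(V0=100.0, barrier=70.0, mu=0.03,
                                        sigma=0.25, T=T, n_steps=50 * int(T),
                                        n_paths=100_000)
        print(f"T = {T:4.1f}:  survival prob ~ {p:.4f}  (std err {se:.4f})")
```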




Journal ArticleDOI
TL;DR: In this article, the authors approximate a Lévy process by either truncating its small jumps or replacing them by a Brownian motion with the same variance, and derive the errors resulting from these approximations for some exotic options (Asian, barrier, lookback and American).
Abstract: We approximate a Lévy process by either truncating its small jumps or replacing them by a Brownian motion with the same variance. Then we derive the errors resulting from these approximations for some exotic options (Asian, barrier, lookback and American). We also propose a simple method to evaluate these options using the approximated Lévy process.
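
The two approximations can be illustrated on a toy symmetric Lévy measure nu(dx) = |x|^(-1-alpha) dx on 0 < |x| <= 1: jumps larger than a cut-off eps form a compound Poisson process that is simulated exactly, while the small-jump part is either dropped or replaced by a centred Gaussian with variance 2 * eps^(2-alpha) / (2-alpha) per unit time, in the spirit of Asmussen and Rosinski. The Lévy measure, alpha, eps and the horizon below are chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(7)

def levy_increment_approx(alpha, eps, T, n_paths, small_jumps="brownian"):
    """Approximate increments X_T of the symmetric pure-jump Levy process with
    Levy measure nu(dx) = |x|^(-1-alpha) dx on 0 < |x| <= 1.
      * jumps with size in (eps, 1]: exact compound Poisson simulation;
      * jumps with size <= eps: dropped ("truncate") or replaced by a centred
        Gaussian with matching variance ("brownian")."""
    lam = 2.0 * (eps**-alpha - 1.0) / alpha            # intensity of jumps > eps
    n_jumps = rng.poisson(lam * T, n_paths)
    x = np.zeros(n_paths)
    for i, n in enumerate(n_jumps):                    # per-path jump sums
        u = rng.uniform(size=n)
        sizes = (eps**-alpha - u * (eps**-alpha - 1.0)) ** (-1.0 / alpha)
        signs = rng.choice([-1.0, 1.0], size=n)
        x[i] = np.sum(signs * sizes)
    if small_jumps == "brownian":
        var_small = 2.0 * eps**(2.0 - alpha) / (2.0 - alpha)   # int_{|x|<eps} x^2 nu(dx)
        x += np.sqrt(var_small * T) * rng.standard_normal(n_paths)
    return x

if __name__ == "__main__":
    alpha, T, n_paths = 0.8, 1.0, 50_000
    for eps in (0.1, 0.01):
        for mode in ("truncate", "brownian"):
            x = levy_increment_approx(alpha, eps, T, n_paths, mode)
            print(f"eps={eps:5.2f} {mode:9s}: Var[X_T] ~ {x.var():.4f}")
    # the full process has Var[X_T] = T * 2/(2-alpha); the Brownian replacement
    # matches that variance exactly, the truncation only as eps -> 0
    print("target:", 2.0 / (2.0 - alpha))
```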


Journal ArticleDOI
TL;DR: In this article, three discretization schemes for the Heston stochastic volatility model are presented: two schemes for simulating the variance process and one scheme for simulating the integrated variance process conditional on the initial and end-point of the variance process.
Abstract: In this paper, we present three new discretization schemes for the Heston stochastic volatility model: two schemes for simulating the variance process and one scheme for simulating the integrated variance process conditional on the initial and end-point of the variance process. Instead of using a short-timestepping approach to simulate the variance process and its integral, these new schemes evolve the Heston process accurately over long steps without the need to sample the intervening values. Hence, prices of financial derivatives can be evaluated rapidly using our new approaches.
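
The variance process in the Heston model is a square-root (CIR) diffusion whose transition over a step of any length is a scaled noncentral chi-square, so the variance endpoint can be sampled exactly over long steps. The sketch below does only that; sampling the integrated variance conditional on the endpoints, which is the harder part of the schemes above, is not reproduced, and the parameters are illustrative.

```python
import numpy as np
from scipy.stats import ncx2

rng = np.random.default_rng(8)

def cir_exact_step(v, kappa, theta, sigma, dt):
    """Exact one-step transition of the CIR variance process
       dV = kappa (theta - V) dt + sigma sqrt(V) dW
    over a step of arbitrary length dt: a scaled noncentral chi-square."""
    c = sigma**2 * (1.0 - np.exp(-kappa * dt)) / (4.0 * kappa)
    d = 4.0 * kappa * theta / sigma**2                 # degrees of freedom
    nc = v * np.exp(-kappa * dt) / c                   # noncentrality parameter
    return c * ncx2.rvs(d, nc, random_state=rng)

if __name__ == "__main__":
    kappa, theta, sigma, v0 = 2.0, 0.04, 0.5, 0.09
    dt, n_paths = 5.0, 200_000                         # one *long* step of 5 years
    v = cir_exact_step(np.full(n_paths, v0), kappa, theta, sigma, dt)
    # compare sample moments with the known conditional mean and variance
    mean_th = theta + (v0 - theta) * np.exp(-kappa * dt)
    var_th = (v0 * sigma**2 * np.exp(-kappa * dt) * (1 - np.exp(-kappa * dt)) / kappa
              + theta * sigma**2 * (1 - np.exp(-kappa * dt))**2 / (2 * kappa))
    print("mean:", v.mean(), "vs", mean_th)
    print("var :", v.var(), "vs", var_th)
```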

Journal ArticleDOI
TL;DR: In this article, a parametric model of the stochastic discount factor is introduced based on empirical results in the literature, which is used in a sequential Monte Carlo algorithm for tracking the parameters of this and of an objective density over time.
Abstract: The focus of this work is on the problem of tracking parameters describing both the stochastic discount factor and the objective / real-world measure dynamically, with the aim of monitoring value at risk or other related diagnostics of interest. The methodology presented incorporates information from derivative prices as well as from the underlying instrument's price over time in order to perform on-line parameter inference. We construct a parametric model of the stochastic discount factor based on empirical results in the literature (Aït-Sahalia and Lo, 2000; Jackwerth, 2000; Rosenberg and Engle, 2002, for example). This is used in a sequential Monte Carlo algorithm for tracking the parameters of this and of an objective density over time. Further, two new techniques for pricing European options in this framework are discussed. In applying the approach to price data, Variance Gamma and Normal Inverse Gaussian models of the underlying price process are considered; these are for illustrative purposes, and other models could easily be considered as well. Both models appear to track realistically; detailed results are presented for the Variance Gamma model, covering the value at risk estimates, expected price change estimates and parameter estimates.
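
At its core, the sequential Monte Carlo machinery referred to is a particle filter: propagate a cloud of weighted parameter particles, reweight them by the likelihood of each new observation, and resample when the weights degenerate. The sketch below is a generic bootstrap filter for a latent Gaussian random-walk parameter observed in noise, not the paper's stochastic-discount-factor and option-price likelihood; all densities and values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

def bootstrap_particle_filter(obs, n_particles, state_std, obs_std):
    """Track a latent parameter theta_t following a Gaussian random walk,
    observed as y_t = theta_t + noise. Returns the filtered means E[theta_t | y_1..t]."""
    particles = rng.normal(0.0, 1.0, n_particles)       # diffuse initial cloud
    weights = np.full(n_particles, 1.0 / n_particles)
    means = []
    for y in obs:
        particles = particles + rng.normal(0.0, state_std, n_particles)  # propagate
        # reweight by the Gaussian observation likelihood, then normalise
        weights = weights * np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        means.append(np.sum(weights * particles))
        # resample when the effective sample size collapses
        if 1.0 / np.sum(weights**2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(means)

if __name__ == "__main__":
    # simulate a slowly drifting "parameter" and noisy observations of it
    T_len, state_std, obs_std = 200, 0.05, 0.5
    truth = np.cumsum(rng.normal(0.0, state_std, T_len))
    obs = truth + rng.normal(0.0, obs_std, T_len)
    est = bootstrap_particle_filter(obs, n_particles=5_000,
                                    state_std=state_std, obs_std=obs_std)
    print("RMSE of filtered mean vs truth:", np.sqrt(np.mean((est - truth) ** 2)))
```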


Journal ArticleDOI
TL;DR: In this paper, the authors present a nontrivial stochastic volatility model that parallels the notable Heston SV model in the sense of admitting exact path simulation as studied by Broadie and Kaya.
Abstract: Exact path simulation of the underlying state variable is of great practical importance in simulating prices of financial derivatives or their sensitivities when there are no analytical solutions for their pricing formulas. However, in general, the complex dependence structure inherent in most nontrivial stochastic volatility (SV) models makes exact simulation difficult. In this paper, we present a nontrivial SV model that parallels the notable Heston SV model in the sense of admitting exact path simulation, as studied by Broadie and Kaya. The instantaneous volatility process of the proposed model is driven by a Gamma process. Extensions to the model, including superposition of independent instantaneous volatility processes, are studied. Numerical results show that the proposed model outperforms the Heston model and two other Lévy-driven SV models in terms of model fit to real option data. The ability to exactly simulate some of the path-dependent derivative prices is emphasized. Moreover, this is the first instance where an infinite-activity volatility process can be applied exactly in such pricing contexts.
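
What makes exact path simulation possible here is that a Gamma process has independent increments with a known distribution that can be sampled exactly. The sketch below uses this in a deliberately simplified stand-in model in which the integrated variance over each step is itself a Gamma increment, so the conditional log-return is Gaussian and every draw is exact; this is not the paper's Gamma-driven instantaneous-volatility model, and the parameters are made up.

```python
import numpy as np

rng = np.random.default_rng(10)

def gamma_time_changed_returns(n_paths, n_steps, dt, a, b, sigma, mu=0.0):
    """Exact simulation of a toy stochastic-volatility model in which the
    integrated variance over each step is a Gamma(a*dt, 1/b) increment and,
    conditionally on it, the log-return is Gaussian. Every draw is exact:
    there is no time-discretisation error within a step."""
    # integrated variance increments: a Gamma process sampled exactly
    iv = rng.gamma(shape=a * dt, scale=1.0 / b, size=(n_paths, n_steps))
    # conditionally Gaussian log-returns given the integrated variance
    returns = (mu * dt - 0.5 * sigma**2 * iv
               + sigma * np.sqrt(iv) * rng.standard_normal((n_paths, n_steps)))
    return returns

if __name__ == "__main__":
    ret = gamma_time_changed_returns(n_paths=100_000, n_steps=252, dt=1.0 / 252,
                                     a=2.0, b=2.0, sigma=0.4)
    annual = ret.sum(axis=1)
    kurt = ((annual - annual.mean())**4).mean() / annual.var()**2
    print("annual log-return mean/std/kurtosis:", annual.mean(), annual.std(), kurt)
```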



Journal ArticleDOI
TL;DR: In this paper, the authors study the stress event model, a bottom-up version of the multi-factor portfolio credit model proposed by Longstaff and Rajan (2008), and decompose the loss distribution into a series expansion.
Abstract: In this note we continue the study of the stress event model, a simple and intuitive dynamic model for credit risky portfolios, proposed by Duffie and Singleton (1999). The model is a bottom-up version of the multi-factor portfolio credit model proposed by Longstaff and Rajan (2008). By a novel identification of independence conditions, we are able to decompose the loss distribution into a series expansion which not only provides a clear picture of the characteristics of the loss distribution but also suggests a fast and accurate approximation for it. Our approach has three important features: (i) it is able to match the standard CDS index tranche prices and the underlying CDS spreads, (ii) the computational speed of the loss distribution is very fast, comparable to that of the Gaussian copula, (iii) the computational cost for additional factors is mild, allowing for more flexibility for calibrations and opening the possibility of studying multi-factor default dependence of a portfolio via a bottom-up approach. We demonstrate the tractability and efficiency of our approach by calibrating it to investment grade CDS index tranches.
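
The Gaussian copula model mentioned as the speed benchmark computes the loss distribution by conditioning on a single common factor, under which defaults are independent, building the conditional distribution name by name with a convolution recursion, and integrating over the factor. The sketch below does this for a homogeneous portfolio with unit losses; it is the comparator model, not the stress event model itself, and the default probability and correlation are illustrative.

```python
import numpy as np
from scipy.stats import norm

def gaussian_copula_loss_dist(p, rho, n_names, n_quad=64):
    """One-factor Gaussian copula loss distribution for n_names identical names
    with default probability p, correlation rho and unit loss given default.
    Conditional on the factor Z the defaults are independent, so the
    conditional loss distribution is built by the standard convolution
    recursion; the result is integrated over Z by Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(n_quad)   # for E[f(Z)], Z ~ N(0,1)
    weights = weights / np.sqrt(2.0 * np.pi)
    loss_dist = np.zeros(n_names + 1)
    for z, w in zip(nodes, weights):
        p_z = norm.cdf((norm.ppf(p) - np.sqrt(rho) * z) / np.sqrt(1.0 - rho))
        cond = np.zeros(n_names + 1)
        cond[0] = 1.0
        for _ in range(n_names):               # add one name at a time
            cond[1:] = cond[1:] * (1.0 - p_z) + cond[:-1] * p_z
            cond[0] *= 1.0 - p_z
        loss_dist += w * cond
    return loss_dist

if __name__ == "__main__":
    dist = gaussian_copula_loss_dist(p=0.02, rho=0.3, n_names=125)
    k = np.arange(126)
    print("expected number of defaults:", np.sum(k * dist))   # should be ~ 125 * 0.02
    print("P(more than 10 defaults):   ", dist[11:].sum())
```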