Showing papers in "Annals of Applied Probability in 2006"


Journal ArticleDOI
TL;DR: In this article, the authors evaluate the cheapest superreplication price of a general (possibly path-dependent) European contingent claim in a context where the model is uncertain, and obtain a partial characterization result and a full characterization that extends the results of Avellaneda, Levy and Paras in the UVM case.
Abstract: The aim of this work is to evaluate the cheapest superreplication price of a general (possibly path-dependent) European contingent claim in a context where the model is uncertain. This setting is a generalization of the uncertain volatility model (UVM) introduced by Avellaneda, Levy and Paras. The uncertainty is specified by a family of martingale probability measures which may not be dominated. We obtain a partial characterization result and a full characterization which extends the results of Avellaneda, Levy and Paras in the UVM case.
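
As a purely schematic illustration of the object being characterized (the notation below is ours, not the paper's), a superreplication price under model uncertainty is typically written as a supremum of expectations over the family of martingale measures:

```latex
% Schematic superreplication price under model uncertainty
% (illustrative notation, not taken from the paper):
\[
  \pi(H) \;=\; \sup_{\mathbb{Q}\in\mathcal{P}} \mathbb{E}_{\mathbb{Q}}[H],
\]
% where H is the (possibly path-dependent) European claim and \mathcal{P}
% is the family of martingale probability measures, which need not be
% dominated by a single reference measure.
```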

385 citations


Journal ArticleDOI
TL;DR: It is proved that under a set of verifiable conditions, ergodic averages calculated from the output of a so-called adaptive MCMC sampler converge to the required value and can even, under more stringent assumptions, satisfy a central limit theorem.
Abstract: In this paper we study the ergodicity properties of some adaptive Markov chain Monte Carlo (MCMC) algorithms that have been recently proposed in the literature. We prove that under a set of verifiable conditions, ergodic averages calculated from the output of a so-called adaptive MCMC sampler converge to the required value and can even, under more stringent assumptions, satisfy a central limit theorem. We prove that the conditions required are satisfied for the independent Metropolis–Hastings algorithm and the random walk Metropolis algorithm with symmetric increments. Finally, we propose an application of these results to the case where the proposal distribution of the Metropolis–Hastings update is a mixture of distributions from a curved exponential family.
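
As a rough illustration of the kind of algorithm such ergodicity results concern, here is a minimal adaptive random-walk Metropolis sampler with a diminishingly adapted proposal scale; the target acceptance rate, step-size schedule and toy target below are assumptions for this sketch, not choices made in the paper.

```python
import numpy as np

def adaptive_rwm(log_target, x0, n_iter=50_000, target_accept=0.234, seed=0):
    """Random-walk Metropolis with a diminishingly adapted proposal scale.

    The log proposal scale is nudged toward an acceptance rate of
    `target_accept` with step sizes gamma_n = n**(-0.6), so the amount of
    adaptation vanishes as the chain runs (illustrative choices).
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    log_scale = 0.0
    lp = log_target(x)
    samples = np.empty((n_iter, x.size))
    for n in range(1, n_iter + 1):
        prop = x + np.exp(log_scale) * rng.standard_normal(x.size)
        lp_prop = log_target(prop)
        accept_prob = min(1.0, np.exp(lp_prop - lp))
        if rng.random() < accept_prob:
            x, lp = prop, lp_prop
        # Diminishing adaptation of the proposal scale.
        log_scale += n ** (-0.6) * (accept_prob - target_accept)
        samples[n - 1] = x
    return samples

# Example: ergodic average of x**2 under a standard normal target.
draws = adaptive_rwm(lambda x: -0.5 * np.sum(x ** 2), x0=np.zeros(1))
print(draws[:, 0].var())  # should be close to 1
```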

341 citations


Journal ArticleDOI
TL;DR: In this paper, the authors apply a combination of averaging and law of large numbers arguments to show that the slow component of the model can be approximated by a deterministic equation and to characterize the asymptotic distribution of the fast components.
Abstract: A reaction network is a chemical system involving multiple reactions and chemical species. Stochastic models of such networks treat the system as a continuous time Markov chain on the number of molecules of each species with reactions as possible transitions of the chain. In many cases of biological interest some of the chemical species in the network are present in much greater abundance than others and reaction rate constants can vary over several orders of magnitude. We consider approaches to approximation of such models that take the multiscale nature of the system into account. Our primary example is a model of a cell’s viral infection for which we apply a combination of averaging and law of large numbers arguments to show that the “slow” component of the model can be approximated by a deterministic equation and to characterize the asymptotic distribution of the “fast” components. The main goal is to illustrate techniques that can be used to reduce the dimensionality of much more complex models.
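
For readers who want to experiment, the following is a small Gillespie-type stochastic simulation of a toy two-species network with one abundant "slow" species and one low-copy "fast" species; the species, reactions and rate constants are illustrative assumptions, not the viral infection model analyzed in the paper.

```python
import numpy as np

def gillespie(x0, rates, stoich, t_max, seed=0):
    """Basic Gillespie stochastic simulation algorithm.

    x0     : initial copy numbers, shape (n_species,)
    rates  : function x -> propensities, shape (n_reactions,)
    stoich : state-change vectors, shape (n_reactions, n_species)
    """
    rng = np.random.default_rng(seed)
    t, x = 0.0, np.array(x0, dtype=float)
    times, states = [t], [x.copy()]
    while t < t_max:
        a = rates(x)
        a0 = a.sum()
        if a0 <= 0:
            break
        t += rng.exponential(1.0 / a0)
        r = rng.choice(len(a), p=a / a0)
        x += stoich[r]
        times.append(t)
        states.append(x.copy())
    return np.array(times), np.array(states)

# Toy two-species network (illustrative, not the paper's viral model):
# species [S, F]; S is abundant and slow, F is a fast low-copy species.
#   reaction 1: S -> S + F   (production of F, rate k1 * S)
#   reaction 2: F -> 0       (fast degradation, rate k2 * F)
#   reaction 3: S -> 0       (slow degradation, rate k3 * S)
k1, k2, k3 = 1.0, 100.0, 0.01
stoich = np.array([[0, 1], [0, -1], [-1, 0]], dtype=float)
rates = lambda x: np.array([k1 * x[0], k2 * x[1], k3 * x[0]])
times, states = gillespie([1000, 0], rates, stoich, t_max=10.0)
# On this time scale F fluctuates rapidly around its quasi-equilibrium
# k1*S/k2, while S decays approximately deterministically.
print(states[-1])
```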

236 citations


Journal ArticleDOI
TL;DR: In this paper, the authors prove results on bounded solutions to backward stochastic differential equations (BSDEs) driven by random measures and apply them to solve different optimization problems with exponential utility in models where the underlying filtration is noncontinuous.
Abstract: We prove results on bounded solutions to backward stochastic differential equations (BSDEs) driven by random measures. Those bounded BSDE solutions are then applied to solve different stochastic optimization problems with exponential utility in models where the underlying filtration is noncontinuous. This includes results on portfolio optimization under an additional liability and on dynamic utility indifference valuation and partial hedging in incomplete financial markets which are exposed to risk from unpredictable events. In particular, we characterize the limiting behavior of the utility indifference hedging strategy and of the indifference value process for vanishing risk aversion.

221 citations


Journal ArticleDOI
TL;DR: In this paper, a time-space discretization scheme for quasi-linear PDEs is proposed based on the theory of fully coupled Forward-Backward SDEs, which provides an efficient probabilistic representation of this type of equation.
Abstract: We propose a time-space discretization scheme for quasi-linear PDEs. The algorithm relies on the theory of fully coupled Forward-Backward SDEs, which provides an efficient probabilistic representation of this type of equation. The derived algorithm holds for strong solutions defined on any interval of arbitrary length. As a by-product, we obtain a discretization procedure for the underlying FBSDE.
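
As a hedged companion sketch (not the paper's fully coupled scheme), the following implements a standard regression-based backward Euler scheme for a decoupled FBSDE, using polynomial least squares for the conditional expectations; the toy forward dynamics and terminal condition are assumptions chosen so the answer can be checked in closed form.

```python
import numpy as np

def simulate_forward(x0, b, sigma, T, n_steps, n_paths, seed=0):
    """Euler scheme for the forward SDE dX = b(X) dt + sigma(X) dW (1-d)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    X = np.empty((n_steps + 1, n_paths))
    dW = np.sqrt(dt) * rng.standard_normal((n_steps, n_paths))
    X[0] = x0
    for i in range(n_steps):
        X[i + 1] = X[i] + b(X[i]) * dt + sigma(X[i]) * dW[i]
    return X, dW, dt

def cond_exp(x, y, degree=4):
    """E[y | x] approximated by polynomial least-squares regression."""
    coeffs = np.polyfit(x, y, degree)
    return np.polyval(coeffs, x)

def solve_bsde(X, dW, dt, f, g):
    """Explicit backward scheme for a *decoupled* BSDE (sketch):
    Y_t = g(X_T) + int_t^T f(X_s, Y_s, Z_s) ds - int_t^T Z_s dW_s."""
    Y = g(X[-1])
    for i in range(X.shape[0] - 2, 0, -1):
        Z = cond_exp(X[i], Y * dW[i]) / dt      # ~ E[Y_{i+1} dW_i | X_i] / dt
        EY = cond_exp(X[i], Y)                  # ~ E[Y_{i+1} | X_i]
        Y = EY + f(X[i], EY, Z) * dt
    # X_0 is deterministic, so the last conditional expectations are plain means.
    Z0 = float((Y * dW[0]).mean() / dt)
    Y0 = float(Y.mean() + f(X[0, 0], Y.mean(), Z0) * dt)
    return Y0

# Illustrative check (assumed example): with f = 0 and g(x) = x, Y_0 should
# equal E[X_T]; for dX = 0.1*X dt + 0.2*X dW, X_0 = 1, that is exp(0.1*T).
X, dW, dt = simulate_forward(1.0, lambda x: 0.1 * x, lambda x: 0.2 * x,
                             T=1.0, n_steps=50, n_paths=20_000)
print(solve_bsde(X, dW, dt, f=lambda x, y, z: 0.0, g=lambda x: x))  # ~ exp(0.1)
```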

169 citations


Journal ArticleDOI
TL;DR: In this article, the authors determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time; the resulting formulas involve the moment (respectively cumulant) generating function of the underlying process and a Laplace- or Fourier-type representation of the contingent claim.
Abstract: We determine the variance-optimal hedge when the logarithm of the underlying price follows a process with stationary independent increments in discrete or continuous time. Although the general solution to this problem is known in the form of a backward recursion or a backward stochastic differential equation, we show that for this class of processes the optimal endowment and strategy can be expressed more explicitly. The corresponding formulas involve the moment (respectively cumulant) generating function of the underlying process and a Laplace- or Fourier-type representation of the contingent claim. An example illustrates that our formulas are fast and easy to evaluate numerically.

156 citations


Journal ArticleDOI
TL;DR: In this article, a new fluctuation identity for a general Levy process giving a quintuple law describing the time of first passage, the last maximum before first passage and the overshoot, the undershoot, the undershoot, and the underhoot of the maximum.
Abstract: We obtain a new fluctuation identity for a general Levy process giving a quintuple law describing the time of first passage, the time of the last maximum before first passage, the overshoot, the undershoot and the undershoot of the last maximum. With the help of this identity, we revisit the results of Kluppelberg, Kyprianou and Maller [Ann. Appl. Probab. 14 (2004) 1766–1801] concerning asymptotic overshoot distribution of a particular class of Levy processes with semi-heavy tails and refine some of their main conclusions. In particular, we explain how different types of first passage contribute to the form of the asymptotic overshoot distribution established in the aforementioned paper. Applications in insurance mathematics are noted with emphasis on the case that the underlying Levy process is spectrally one sided.

152 citations


Journal ArticleDOI
TL;DR: This paper proves that the re-scaled stationary distribution of the GJN converges to the stationary distribution of the RBM, thus validating a so-called “interchange-of-limits” for this class of networks.
Abstract: We consider a single class open queueing network, also known as a generalized Jackson network (GJN). A classical result in heavy-traffic theory asserts that the sequence of normalized queue length processes of the GJN converges weakly to a reflected Brownian motion (RBM) in the orthant, as the traffic intensity approaches unity. However, barring simple instances, it is still not known whether the stationary distribution of RBM provides a valid approximation for the steady-state of the original network. In this paper we resolve this open problem by proving that the re-scaled stationary distribution of the GJN converges to the stationary distribution of the RBM, thus validating a so-called “interchange-of-limits” for this class of networks. Our method of proof involves a combination of Lyapunov function techniques, strong approximations and tail probability bounds that yield tightness of the sequence of stationary distributions of the GJN.

123 citations


Journal ArticleDOI
TL;DR: In this article, the first-order expansion of marginal utility-based prices with respect to a small number of random endowments is shown to have important qualitative properties if and only if there is a risk-tolerance wealth process.
Abstract: In the general framework of a semimartingale financial model and a utility function U defined on the positive real line, we compute the first-order expansion of marginal utility-based prices with respect to a “small” number of random endowments. We show that this linear approximation has some important qualitative properties if and only if there is a risk-tolerance wealth process. In particular, these properties hold true in the following polar cases: 1. for any utility function U, if and only if the set of state price densities has a greatest element from the point of view of second-order stochastic dominance; 2. for any financial model, if and only if U is a power utility function (or an exponential utility function if it is defined on the whole real line).

104 citations


Journal ArticleDOI
TL;DR: For two decades, the Colless index has been the most frequently used statistic for assessing the balance of phylogenetic trees as discussed by the authors, and this statistic is studied under the Yule and uniform models.
Abstract: For two decades, the Colless index has been the most frequently used statistic for assessing the balance of phylogenetic trees. In this article, this statistic is studied under the Yule and unif ...
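
For reference, the Colless index of a rooted binary tree is the sum, over internal nodes, of the absolute difference between the numbers of leaves in the two subtrees; a short Python sketch (the nested-tuple tree encoding is an illustrative choice):

```python
def colless(tree):
    """Colless index of a rooted binary tree.

    A tree is encoded as a nested 2-tuple of subtrees, with leaves
    represented by any non-tuple object (illustrative encoding).
    Returns (number_of_leaves, colless_index).
    """
    if not isinstance(tree, tuple):
        return 1, 0
    n_left, c_left = colless(tree[0])
    n_right, c_right = colless(tree[1])
    return n_left + n_right, c_left + c_right + abs(n_left - n_right)

# The caterpillar tree on 4 leaves is maximally unbalanced:
print(colless(((("a", "b"), "c"), "d")))  # -> (4, 3), since 0 + 1 + 2 = 3
```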

99 citations


Journal ArticleDOI
TL;DR: The optimal scaling rule for the Metropolis algorithm, which tunes the overall algorithm acceptance rate to be 0.234, holds for the so-called Metropolis-within-Gibbs algorithm as well, and the optimal efficiency obtainable is independent of the dimensionality of the update rule.
Abstract: In this paper we shall consider optimal scaling problems for high-dimensional Metropolis–Hastings algorithms where updates can be chosen to be lower dimensional than the target density itself. We find that the optimal scaling rule for the Metropolis algorithm, which tunes the overall algorithm acceptance rate to be 0.234, holds for the so-called Metropolis-within-Gibbs algorithm as well. Furthermore, the optimal efficiency obtainable is independent of the dimensionality of the update rule. This has important implications for the MCMC practitioner since high-dimensional updates are generally computationally more demanding, so that lower-dimensional updates are therefore to be preferred. Similar results with rather different conclusions are given for so-called Langevin updates. In this case, it is found that high-dimensional updates are frequently most efficient, even taking into account computing costs.
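
A quick numerical illustration of the 0.234 rule (the Gaussian toy target and the classical 2.38/√d proposal scaling are assumptions for this sketch, not taken from the paper):

```python
import numpy as np

def rwm_acceptance(dim, scale, n_iter=20_000, seed=0):
    """Empirical acceptance rate of random-walk Metropolis on N(0, I_dim)."""
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)
    lp = -0.5 * x @ x
    accepts = 0
    for _ in range(n_iter):
        prop = x + scale * rng.standard_normal(dim)
        lp_prop = -0.5 * prop @ prop
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
            accepts += 1
    return accepts / n_iter

# With the classical scaling 2.38 / sqrt(d), the acceptance rate should be
# close to 0.234 for moderately large d (illustrative check).
for d in (10, 50, 100):
    print(d, rwm_acceptance(d, 2.38 / np.sqrt(d)))
```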

Journal ArticleDOI
TL;DR: In this paper, the authors consider conditions under which Metropolis-within-Gibbs and trans-dimensional Markov chains are or are not Harris recurrent, and present a simple but natural two-dimensional counter-example showing how Harris recurrence can fail, as well as a variety of positive results which guarantee Harris recurrence.
Abstract: A ϕ-irreducible and aperiodic Markov chain with stationary probability distribution will converge to its stationary distribution from almost all starting points. The property of Harris recurrence allows us to replace “almost all” by “all,” which is potentially important when running Markov chain Monte Carlo algorithms. Full-dimensional Metropolis–Hastings algorithms are known to be Harris recurrent. In this paper, we consider conditions under which Metropolis-within-Gibbs and trans-dimensional Markov chains are or are not Harris recurrent. We present a simple but natural two-dimensional counter-example showing how Harris recurrence can fail, and also a variety of positive results which guarantee Harris recurrence. We also present some open problems. We close with a discussion of the practical implications for MCMC algorithms.

Journal ArticleDOI
TL;DR: In this paper, the authors call a Markov model nonsingular if all transition matrices have determinants bounded away from 0 (and 1), and highlight the role of this nonsingularity condition for the problem of learning phylogenies and hidden Markov models.
Abstract: In this paper we study the problem of learning phylogenies and hidden Markov models. We call a Markov model nonsingular if all transition matrices have determinants bounded away from 0 (and 1). We highlight the role of the nonsingularity condition for the learning problem. Learning hidden Markov models without the nonsingularity condition is at least as hard as learning parity with noise, a well-known learning problem conjectured to be computationally hard. On the other hand, we give a polynomial-time algorithm for learning nonsingular phylogenies and hidden Markov models.

Journal ArticleDOI
TL;DR: The objective is to design an alarm time which is adapted to the history of the arrival process and detects the disorder time as soon as possible; in this paper the new arrival rate after the disorder is assumed to be a random variable.
Abstract: We study the quickest detection problem of a sudden change in the arrival rate of a Poisson process from a known value to an unknown and unobservable value at an unknown and unobservable disorder time. Our objective is to design an alarm time which is adapted to the history of the arrival process and detects the disorder time as soon as possible. In previous solvable versions of the Poisson disorder problem, the arrival rate after the disorder has been assumed a known constant. In reality, however, we may at most have some prior information about the likely values of the new arrival rate before the disorder actually happens, and insufficient estimates of the new rate after the disorder happens. Consequently, we assume in this paper that the new arrival rate after the disorder is a random variable. The detection problem is shown to admit a finite-dimensional Markovian sufficient statistic, if the new rate has a discrete distribution with finitely many atoms. Furthermore, the detection problem is cast as a discounted optimal stopping problem with running cost for a finite-dimensional piecewise-deterministic Markov process. This optimal stopping problem is studied in detail in the special case where the new arrival rate has Bernoulli distribution. This is a nontrivial optimal stopping problem for a two-dimensional piecewise-deterministic Markov process driven by the same point process. Using a suitable single-jump operator, we solve it fully, describe the analytic properties of the value function and the stopping region, and present methods for their numerical calculation. We provide a concrete example where the value function does not satisfy the smooth-fit principle on a proper subset of the connected, continuously differentiable optimal stopping boundary, whereas it does on the complement of this set.

Journal ArticleDOI
TL;DR: In this paper, a family of continuous-time generalized autoregressive conditionally heteroscedastic processes, generalizing the COGARCH(1,1) process of Kluppelberg, Lindner and Maller is introduced and studied.
Abstract: A family of continuous-time generalized autoregressive conditionally heteroscedastic processes, generalizing the COGARCH(1,1) process of Kluppelberg, Lindner and Maller [J. Appl. Probab. 41 (2004) 601–622], is introduced and studied. The resulting COGARCH(p,q) processes, q≥p≥1, exhibit many of the characteristic features of observed financial time series, while their corresponding volatility and squared increment processes display a broader range of autocorrelation structures than those of the COGARCH(1,1) process. We establish sufficient conditions for the existence of a strictly stationary nonnegative solution of the equations for the volatility process and, under conditions which ensure the finiteness of the required moments, determine the autocorrelation functions of both the volatility and the squared increment processes. The volatility process is found to have the autocorrelation function of a continuous-time autoregressive moving average process.
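
As an illustrative sketch only (the paper's COGARCH(p,q) construction is not reproduced here), the following simulates a COGARCH(1,1)-type volatility and price path driven by a compound Poisson Lévy process with Gaussian jumps; the mean-reverting update between jumps and all parameter values are assumptions for the illustration.

```python
import numpy as np

def cogarch11_compound_poisson(beta, eta, phi, jump_rate, T, seed=0):
    """Simulate a COGARCH(1,1)-type process driven by a compound Poisson
    Levy process with standard normal jumps (illustrative sketch).

    Between jumps the squared volatility decays exponentially toward
    beta/eta; at a jump dL the squared volatility increases by
    phi * sigma2 * dL**2 and the price process G jumps by sigma * dL.
    """
    rng = np.random.default_rng(seed)
    t, sigma2, G = 0.0, beta / eta, 0.0
    times, vols, prices = [t], [sigma2], [G]
    while True:
        wait = rng.exponential(1.0 / jump_rate)
        if t + wait > T:
            break
        t += wait
        # Deterministic mean reversion of sigma^2 between jumps.
        sigma2 = beta / eta + (sigma2 - beta / eta) * np.exp(-eta * wait)
        dL = rng.standard_normal()
        G += np.sqrt(sigma2) * dL          # uses the pre-jump volatility
        sigma2 += phi * sigma2 * dL ** 2   # volatility jump
        times.append(t)
        vols.append(sigma2)
        prices.append(G)
    return np.array(times), np.array(vols), np.array(prices)

# Illustrative parameter choice (not calibrated, not from the paper):
times, vols, prices = cogarch11_compound_poisson(
    beta=0.04, eta=0.05, phi=0.03, jump_rate=10.0, T=100.0)
print(vols.mean(), np.abs(np.diff(prices)).mean())
```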

Journal ArticleDOI
TL;DR: For a genetic locus carrying a strongly beneficial allele which has just fixed in a large population, the ancestry at a linked neutral locus is modeled by a structured coalescent in a random background as mentioned in this paper.
Abstract: For a genetic locus carrying a strongly beneficial allele which has just fixed in a large population, we study the ancestry at a linked neutral locus. During this “selective sweep” the linkage between the two loci is broken up by recombination and the ancestry at the neutral locus is modeled by a structured coalescent in a random background. For large selection coefficients α and under an appropriate scaling of the recombination rate, we derive a sampling formula with an order of accuracy of $\mathcal{O}((\log \alpha)^{-2})$ in probability. In particular we see that, with this order of accuracy, in a sample of fixed size there are at most two nonsingleton families of individuals which are identical by descent at the neutral locus from the beginning of the sweep. This refines a formula going back to the work of Maynard Smith and Haigh, and complements recent work of Schweinsberg and Durrett on selective sweeps in the Moran model.

Journal ArticleDOI
TL;DR: In this paper, the authors provide a scheme for simulating diffusion processes evolving in one-dimensional discontinuous media, which does not rely on smoothing the coefficients that appear in the infinitesimal generator of the diffusion processes, but uses instead an exact description of the behavior of their trajectories when they reach the points of discontinuity.
Abstract: The aim of this article is to provide a scheme for simulating diffusion processes evolving in one-dimensional discontinuous media. This scheme does not rely on smoothing the coefficients that appear in the infinitesimal generator of the diffusion processes, but uses instead an exact description of the behavior of their trajectories when they reach the points of discontinuity. This description is supplied with the local comparison of the trajectories of the diffusion processes with those of a skew Brownian motion.
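
The scheme itself is not reproduced here, but the skew Brownian motion used for the local comparison can be illustrated by the classical Harrison–Shepp random-walk approximation; the parameter value and the Monte Carlo check below are assumptions for this sketch.

```python
import numpy as np

def skew_bm_walk(alpha, n_steps, seed=0):
    """Random-walk approximation of skew Brownian motion (Harrison--Shepp).

    Away from 0 the walk steps +1/-1 with probability 1/2 each; at 0 it
    steps +1 with probability alpha and -1 with probability 1 - alpha.
    Under diffusive rescaling the walk converges to skew Brownian motion
    with parameter alpha.
    """
    rng = np.random.default_rng(seed)
    path = np.zeros(n_steps + 1, dtype=int)
    for i in range(n_steps):
        p_up = alpha if path[i] == 0 else 0.5
        path[i + 1] = path[i] + (1 if rng.random() < p_up else -1)
    return path / np.sqrt(n_steps)  # diffusive space rescaling

# For skew BM started at 0, P(X_t > 0) = alpha at any fixed time t.
ends = [skew_bm_walk(0.7, 10_000, seed=s)[-1] > 0 for s in range(200)]
print(np.mean(ends))  # should be close to 0.7
```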

Journal ArticleDOI
TL;DR: In this article, the authors studied a continuous-time market where an agent, having specified an investment horizon and a targeted terminal mean return, seeks to minimize the variance of the return.
Abstract: This paper studies a continuous-time market where an agent, having specified an investment horizon and a targeted terminal mean return, seeks to minimize the variance of the return. The optimal portfolio of such a problem is called mean-variance efficient a la Markowitz. It is shown that, when the market coefficients are deterministic functions of time, a mean-variance efficient portfolio realizes the (discounted) targeted return on or before the terminal date with a probability greater than 0.8072. This number is universal irrespective of the market parameters, the targeted return and the length of the investment horizon.

Journal ArticleDOI
TL;DR: In this paper, the authors give a necessary and sufficient condition for a sequence of birth and death chains to converge abruptly to stationarity, that is, to present a cut-off.
Abstract: This paper gives a necessary and sufficient condition for a sequence of birth and death chains to converge abruptly to stationarity, that is, to present a cut-off. The condition involves the notions of spectral gap and mixing time. Y. Peres has observed that for many families of Markov chains, there is a cut-off if and only if the product of spectral gap and mixing time tends to infinity. We establish this for arbitrary birth and death chains in continuous time when the convergence is measured in separation and the chains all start at 0.

Journal ArticleDOI
TL;DR: In this article, the authors consider a polymer, with monomer locations modeled by the trajectory of a Markov chain, in the presence of a potential that interacts with the polymer when it visits a particular site 0.
Abstract: We consider a polymer, with monomer locations modeled by the trajectory of a Markov chain, in the presence of a potential that interacts with the polymer when it visits a particular site 0. Disorder is introduced by, for example, having the interaction vary from one monomer to another, as a constant u plus i.i.d. mean-0 randomness. There is a critical value of u above which the polymer is pinned, placing a positive fraction of its monomers at 0 with high probability. This critical point may differ for the quenched, annealed and deterministic cases. We show that self-averaging occurs, meaning that the quenched free energy and critical point are nonrandom, off a null set. We evaluate the critical point for a deterministic interaction (u without added randomness) and establish our main result that the critical point in the quenched case is strictly smaller. We show that, for every fixed u∈ℝ, pinning occurs at sufficiently low temperatures. If the excursion length distribution has polynomial tails and the interaction does not have a finite exponential moment, then pinning occurs for all u∈ℝ at arbitrary temperature. Our results apply to other mathematically similar situations as well, such as a directed polymer that interacts with a random potential located in a one-dimensional defect, or an interface in two dimensions interacting with a random potential along a wall.

Journal ArticleDOI
TL;DR: In this article, the authors established the weak convergence rate of nonlinear two-time-scale stochastic approximation algorithms and introduced the averaging principle in the context of two-time-scale stochastic approximation algorithms.
Abstract: The first aim of this paper is to establish the weak convergence rate of nonlinear two-time-scale stochastic approximation algorithms. Its second aim is to introduce the averaging principle in the context of two-time-scale stochastic approximation algorithms. We first define the notion of asymptotic efficiency in this framework, then introduce the averaged two-time-scale stochastic approximation algorithm, and finally establish its weak convergence rate. We show, in particular, that both components of the averaged two-time-scale stochastic approximation algorithm simultaneously converge at the optimal rate $\sqrt{n}$.
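
A toy sketch of a two-time-scale stochastic approximation recursion with Polyak–Ruppert-type averaging of both components; the linear toy dynamics, noise and step-size exponents are illustrative assumptions, not those analyzed in the paper.

```python
import numpy as np

def two_time_scale_sa(n_iter=200_000, c=2.0, seed=0):
    """Toy linear two-time-scale stochastic approximation with averaging.

    The fast component w tracks the slow component theta, and the slow
    component seeks the root of c - w, so both converge to c.  The step
    sizes a_n = n**(-0.9) (slow) and b_n = n**(-0.6) (fast) and the
    Gaussian noise are illustrative choices.
    """
    rng = np.random.default_rng(seed)
    theta = w = 0.0
    theta_sum = w_sum = 0.0
    for n in range(1, n_iter + 1):
        a_n, b_n = n ** (-0.9), n ** (-0.6)
        w += b_n * (theta - w + rng.standard_normal())   # fast time scale
        theta += a_n * (c - w + rng.standard_normal())   # slow time scale
        theta_sum += theta
        w_sum += w
    # Polyak-Ruppert-type averages of both components.
    return theta_sum / n_iter, w_sum / n_iter

print(two_time_scale_sa())  # both averages should be close to 2.0
```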

Journal ArticleDOI
TL;DR: In this paper, it was shown that a perpetual optimal stopping game always has a value and that there exists an optimal stopping time for the seller, but not necessarily for the buyer.
Abstract: We show, under weaker assumptions than in the previous literature, that a perpetual optimal stopping game always has a value. We also show that there exists an optimal stopping time for the seller, but not necessarily for the buyer. Moreover, conditions are provided under which the existence of an optimal stopping time for the buyer is guaranteed. The results are illustrated explicitly in two examples.

Journal ArticleDOI
TL;DR: In this paper, the authors studied the differentiability of the value functions of the primal and dual optimization problems that appear in the setting of expected utility maximization in incomplete markets, and showed that the key conditions for the results to hold true are that the relative risk aversion coefficient of the utility function is uniformly bounded away from zero and infinity, and that the prices of traded securities are sigma-bounded under the numeraire given by the optimal wealth process.
Abstract: We study the two-times differentiability of the value functions of the primal and dual optimization problems that appear in the setting of expected utility maximization in incomplete markets. We also study the differentiability of the solutions to these problems with respect to their initial values. We show that the key conditions for the results to hold true are that the relative risk aversion coefficient of the utility function is uniformly bounded away from zero and infinity, and that the prices of traded securities are sigma-bounded under the numeraire given by the optimal wealth process.

Journal ArticleDOI
TL;DR: In this paper, it is shown that ISE (integrated SuperBrownian excursion) has a (random) Holder continuous density and that the vertical profile of embedded trees converges to this density, at least for some such trees.
Abstract: It has been known for a few years that the occupation measure of several models of embedded trees converges, after a suitable normalization, to the random measure called ISE (integrated SuperBrownian excursion). Here, we prove a local version of this result: ISE has a (random) Holder continuous density, and the vertical profile of embedded trees converges to this density, at least for some such trees. As a consequence, we derive a formula for the distribution of the density of ISE at a given point. This follows from earlier results by Bousquet-Melou on convergence of the vertical profile at a fixed point. We also provide a recurrence relation defining the moments of the (random) density of ISE.

Journal ArticleDOI
TL;DR: This paper presents the first theoretical work analyzing the rate of convergence of several Markov chains widely used in phylogenetic inference, and proves that many of the popular Markov chains take exponentially long to reach their stationary distribution.
Abstract: Markov chain Monte Carlo algorithms play a key role in the Bayesian approach to phylogenetic inference. In this paper, we present the first theoretical work analyzing the rate of convergence of several Markov chains widely used in phylogenetic inference. We analyze simple, realistic examples where these Markov chains fail to converge quickly. In particular, the data studied are generated from a pair of trees, under a standard evolutionary model. We prove that many of the popular Markov chains take exponentially long to reach their stationary distribution. Our construction is pertinent since it is well known that phylogenetic trees for genes may differ within a single organism. Our results shed a cautionary light on phylogenetic analysis using Bayesian inference and highlight future directions for potential theoretical work.

Journal ArticleDOI
TL;DR: In this paper, a nonhomogeneous Generalized Polya Urn (GPU) model is proposed to yield limiting treatment proportions according to any desired allocation target, and the applicability of the model is illustrated with a number of examples.
Abstract: The Generalized Polya Urn (GPU) is a popular urn model which is widely used in many disciplines. In particular, it is extensively used in treatment allocation schemes in clinical trials. In this paper, we propose a sequential estimation-adjusted urn model (a nonhomogeneous GPU) which has a wide spectrum of applications. Because the proposed urn model depends on sequential estimations of unknown parameters, the derivation of asymptotic properties is mathematically intricate and the corresponding results are unavailable in the literature. We overcome these hurdles and establish the strong consistency and asymptotic normality for both the patient allocation and the estimators of unknown parameters, under some widely satisfied conditions. These properties are important for statistical inferences and they are also useful for the understanding of the urn limiting process. A superior feature of our proposed model is its capability to yield limiting treatment proportions according to any desired allocation target. The applicability of our model is illustrated with a number of examples.
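
A toy sketch of a sequential estimation-adjusted urn for two treatments: after each response the success probabilities are re-estimated and balls are added according to an allocation target computed from the estimates; the particular target, smoothing and parameter values below are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def seu_trial(p_true=(0.7, 0.5), n_patients=5_000, seed=0):
    """Toy sequential estimation-adjusted urn for two treatments.

    After each response the unknown success probabilities are re-estimated
    and balls are added to the urn in proportion to a desired allocation
    target rho(p_hat); here rho = sqrt(pA)/(sqrt(pA)+sqrt(pB)) is used
    purely as an illustrative target.
    """
    rng = np.random.default_rng(seed)
    urn = np.array([1.0, 1.0])          # initial urn composition
    successes = np.array([1.0, 1.0])    # +1 smoothing to avoid 0/0
    trials = np.array([2.0, 2.0])
    assigned = np.zeros(2)
    for _ in range(n_patients):
        arm = rng.choice(2, p=urn / urn.sum())     # draw a ball
        assigned[arm] += 1
        trials[arm] += 1
        successes[arm] += rng.random() < p_true[arm]
        p_hat = successes / trials
        target = np.sqrt(p_hat) / np.sqrt(p_hat).sum()
        urn += target                              # estimation-adjusted addition
    return assigned / n_patients, successes / trials

alloc, p_hat = seu_trial()
print(alloc)   # arm 0 share should approach sqrt(0.7)/(sqrt(0.7)+sqrt(0.5)) ~ 0.54
print(p_hat)
```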

Journal ArticleDOI
TL;DR: In this article, the authors considered the scheduling control problem for a family of unitary networks under heavy traffic, with general interarrival and service times, probabilistic routing and infinite horizon discounted linear holding cost.
Abstract: We consider the scheduling control problem for a family of unitary networks under heavy traffic, with general interarrival and service times, probabilistic routing and infinite horizon discounted linear holding cost. A natural nonanticipativity condition for admissibility of control policies is introduced. The condition is seen to hold for a broad class of problems. Using this formulation of admissible controls and a time-transformation technique, we establish that the infimum of the cost for the network control problem over all admissible sequencing control policies is asymptotically bounded below by the value function of an associated diffusion control problem (the Brownian control problem). This result provides a useful bound on the best achievable performance for any admissible control policy for a wide class of networks.

Journal ArticleDOI
TL;DR: A new class of self-similar symmetric α-stable processes with stationary increments arising as a large time scale limit in a situation where many users are earning random rewards or incurring random costs is described.
Abstract: We describe a new class of self-similar symmetric α-stable processes with stationary increments arising as a large time scale limit in a situation where many users are earning random rewards or incurring random costs. The resulting models are different from the ones studied earlier both in their memory properties and smoothness of the sample paths.

Journal ArticleDOI
TL;DR: In this paper, a central limit theorem for the number of vertices of convex polytopes induced by stationary Poisson hyperplane processes in ℝ^d was derived.
Abstract: We derive a central limit theorem for the number of vertices of convex polytopes induced by stationary Poisson hyperplane processes in ℝ^d. This result generalizes an earlier one proved by Paroux [Adv. in Appl. Probab. 30 (1998) 640–656] for intersection points of motion-invariant Poisson line processes in ℝ^2. Our proof is based on Hoeffding’s decomposition of U-statistics which seems to be more efficient and adequate to tackle the higher-dimensional case than the “method of moments” used in [Adv. in Appl. Probab. 30 (1998) 640–656] to treat the case d=2. Moreover, we extend our central limit theorem in several directions. First we consider k-flat processes induced by Poisson hyperplane processes in ℝ^d for 0≤k≤d−1. Second we derive (asymptotic) confidence intervals for the intensities of these k-flat processes and, third, we prove multivariate central limit theorems for the d-dimensional joint vectors of numbers of k-flats and their k-volumes, respectively, in an increasing spherical region.

Journal ArticleDOI
TL;DR: In this paper, the problem of nonlinear filtering of the coefficients in asset price models with stochastic volatility has been studied and a closed form optimal recursive Bayesian filter is derived based on the observations of (τ_k, log S_{τ_k})_{k≥1}.
Abstract: This paper is concerned with nonlinear filtering of the coefficients in asset price models with stochastic volatility. More specifically, we assume that the asset price process S=(S_t)_{t≥0} is given by dS_t = m(θ_t)S_t dt + v(θ_t)S_t dB_t, where B=(B_t)_{t≥0} is a Brownian motion, v is a positive function and θ=(θ_t)_{t≥0} is a cadlag strong Markov process. The random process θ is unobservable. We assume also that the asset price S_t is observed only at random times 0<τ_1<τ_2< ... . This is an appropriate assumption when modeling high frequency financial data (e.g., tick-by-tick stock prices). In the above setting the problem of estimation of θ can be approached as a special nonlinear filtering problem with measurements generated by a multivariate point process (τ_k, log S_{τ_k}). While quite natural, this problem does not fit into the “standard” diffusion or simple point process filtering frameworks and requires more technical tools. We derive a closed form optimal recursive Bayesian filter for θ_t, based on the observations of (τ_k, log S_{τ_k})_{k≥1}. It turns out that the filter is given by a recursive system that involves only deterministic Kolmogorov-type equations, which should make the numerical implementation relatively easy.
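
A simplified sketch of such a filter for a finite-state regime process observed through log-price increments at random times: the posterior is propagated with the Markov semigroup between observations and updated with a Gaussian likelihood at each observation. Treating the hidden state as constant between observations and all numerical values below are simplifying assumptions, not the paper's exact filter.

```python
import numpy as np
from scipy.linalg import expm
from scipy.stats import norm

def filter_step(pi, dt, log_return, Q, m, v):
    """One step of an (approximate) Bayesian filter for a finite-state
    volatility regime theta observed through log-price increments at
    random times.

    pi          : current posterior over the states of theta
    dt          : time since the last observation
    log_return  : observed increment of log S over that interval
    Q           : generator matrix of the hidden Markov chain
    m, v        : drift and volatility values per state

    The hidden state is treated as constant between observations, which is
    a simplification relative to the exact filter derived in the paper.
    """
    prior = pi @ expm(Q * dt)                     # propagate through time
    lik = norm.pdf(log_return,
                   loc=(m - 0.5 * v ** 2) * dt,
                   scale=v * np.sqrt(dt))         # per-state likelihood
    post = prior * lik
    return post / post.sum()

# Illustrative two-regime example (all numbers are assumptions):
Q = np.array([[-0.5, 0.5], [1.0, -1.0]])
m = np.array([0.05, 0.05])
v = np.array([0.1, 0.4])
pi = np.array([0.5, 0.5])
for dt, r in [(0.02, 0.001), (0.01, -0.03), (0.05, 0.08)]:
    pi = filter_step(pi, dt, r, Q, m, v)
    print(pi)
```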