
Showing papers in "Scandinavian Actuarial Journal in 2014"


Journal ArticleDOI
TL;DR: A micro-level loss reserving model is calibrated to historical data and used to project the future development of open claims; it outperforms the results obtained with traditional loss reserving methods for aggregate data.
Abstract: The vast literature on stochastic loss reserving concentrates on data aggregated in run-off triangles. However, a triangle is a summary of an underlying data-set with the development of individual claims. We refer to this data-set as ‘micro-level’ data. Using the framework of Position Dependent Marked Poisson Processes and statistical tools for recurrent events, a data-set is analyzed with liability claims from a European insurance company. We use detailed information on the time of occurrence of the claim, the delay between occurrence and reporting to the insurance company, the occurrences of payments and their sizes, and the final settlement. Our specifications are (semi)parametric and our approach is likelihood based. We calibrate our model to historical data and use it to project the future development of open claims. An out-of-sample prediction exercise shows that we obtain detailed and valuable reserve calculations. For the case study developed in this paper, the micro-level model outperforms the results obtained with traditional loss reserving methods for aggregate data.

122 citations
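As a purely illustrative companion to the abstract above, the sketch below simulates individual claim histories (occurrence, reporting delay, payments) so that reported and incurred-but-not-reported claims can be separated at a valuation date. The homogeneous Poisson occurrence rate, exponential reporting delays and lognormal payment sizes are assumptions for illustration, not the paper's semiparametric specification.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_claims(occ_rate=50, years=5.0, report_mean=0.3,
                    pay_rate=2.0, pay_mu=7.0, pay_sigma=1.0):
    """Simulate occurrence, reporting and payment history for individual claims."""
    n = rng.poisson(occ_rate * years)
    occurred = rng.uniform(0, years, n)                      # occurrence times
    reported = occurred + rng.exponential(report_mean, n)    # reporting times
    claims = []
    for occ, rep in zip(occurred, reported):
        n_pay = rng.poisson(pay_rate)                        # number of payments
        paid = rng.lognormal(pay_mu, pay_sigma, n_pay).sum()
        claims.append({"occurred": occ, "reported": rep, "paid": paid})
    return claims

claims = simulate_claims()
valuation_date = 5.0
ibnr = [c for c in claims if c["reported"] > valuation_date]  # incurred but not reported
print(f"{len(claims)} claims simulated, {len(ibnr)} still unreported at the valuation date")
```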


Journal ArticleDOI
TL;DR: This paper shows that stop-loss reinsurance is an optimal contract under law-invariant convex risk measures via a new, simple geometric argument, and highlights that the corresponding optimal reinsurance still provides protection coverage against extreme loss irrespective of any potential increment in its probability of occurrence.
Abstract: In recent years, general risk measures have played an important role in risk management in both the finance and insurance industries. As a consequence, there is a growing body of research on optimal reinsurance decision problems using risk measures beyond the classical expected utility framework. In this paper, we first show that stop-loss reinsurance is an optimal contract under law-invariant convex risk measures via a new simple geometric argument. A similar approach is then used to tackle the same optimal reinsurance problem under Value at Risk and Conditional Tail Expectation; it is interesting to note that, instead of stop-loss reinsurances, insurance layers serve as the optimal solution. These two results highlight that the law-invariant convex risk measure is better and more robust, in the sense that the corresponding optimal reinsurance still provides the protection coverage against extreme loss irrespective of the potential increment of its probability of occurrence, to expected larger claim than Value ...

83 citations


Journal ArticleDOI
TL;DR: New composite models based on the lognormal distribution are proposed and at least one of the newly proposed models is shown to give a better fit to the Danish fire insurance data.
Abstract: In recent years, several composite models based on the lognormal distribution have been developed for the Danish fire insurance data. In this note, we propose new composite models based on the lognormal distribution. At least one of the newly proposed models is shown to give a better fit to the Danish fire insurance data.

76 citations
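For intuition, here is a minimal sketch of a generic two-piece composite density with a lognormal body and a Pareto tail; the threshold, mixing weight and parameters are arbitrary illustrations, and the continuity and differentiability conditions imposed in the papers on composite models are omitted.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import lognorm

def composite_pdf(x, r=0.7, theta=1.0, mu=0.0, sigma=1.0, alpha=1.5):
    """Weight r on a lognormal truncated to (0, theta], weight 1 - r on a
    Pareto tail on (theta, inf); smoothness at theta is not enforced here."""
    x = np.asarray(x, dtype=float)
    body = lognorm.pdf(x, s=sigma, scale=np.exp(mu)) / lognorm.cdf(theta, s=sigma, scale=np.exp(mu))
    tail = alpha * theta**alpha / x**(alpha + 1)
    return np.where(x <= theta, r * body, (1 - r) * tail)

# sanity check: the density integrates to one
total = quad(composite_pdf, 0, 1.0)[0] + quad(composite_pdf, 1.0, np.inf)[0]
print(total)
```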


Journal ArticleDOI
Adam Lenart
TL;DR: Exact formulas are derived for the moment-generating function and central moments of the Gompertz distribution, which is widely used to describe the distribution of adult deaths, and higher-accuracy approximations are defined based on the exact central moments.
Abstract: The Gompertz distribution is widely used to describe the distribution of adult deaths. Previous works concentrated on formulating approximate relationships to characterise it. However, using the generalised integro-exponential function, exact formulas can be derived for its moment-generating function and central moments. Based on the exact central moments, higher accuracy approximations can be defined for them. In demographic or actuarial applications, maximum likelihood estimation is often used to determine the parameters of the Gompertz distribution. By solving the maximum likelihood estimates analytically, the dimension of the optimisation problem can be reduced to one both in the case of discrete and continuous data. Monte Carlo experiments show that by ML estimation, higher accuracy estimates can be acquired than by the method of moments.

55 citations
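The dimension reduction mentioned in the abstract can be illustrated as follows: with Gompertz hazard mu(x) = a*exp(b*x) and completely observed ages at death, the level parameter a has a closed-form maximum likelihood estimate for any fixed b, leaving a one-dimensional search over b. The sketch below uses synthetic data and illustrative parameter values; it is not the paper's derivation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def profile_negloglik(b, x):
    """Profile negative log-likelihood in b after substituting the closed-form
    MLE of a given b: a = n*b / sum(exp(b*x) - 1)."""
    a = len(x) * b / np.sum(np.expm1(b * x))
    return -np.sum(np.log(a) + b * x - (a / b) * np.expm1(b * x))

# synthetic adult ages at death from a Gompertz law (illustrative parameters)
rng = np.random.default_rng(0)
a_true, b_true = 5e-5, 0.11
u = rng.uniform(size=5_000)
x = np.log1p(-b_true / a_true * np.log(u)) / b_true   # inverse Gompertz CDF

res = minimize_scalar(profile_negloglik, bounds=(0.01, 0.3), args=(x,), method="bounded")
b_hat = res.x
a_hat = len(x) * b_hat / np.sum(np.expm1(b_hat * x))
print(a_hat, b_hat)
```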


Journal ArticleDOI
TL;DR: A dynamic modelling approach for predicting individual customers’ risk of leaving an insurance company using a logistic longitudinal regression model that incorporates time-dynamic explanatory variables and interactions is fitted to the data.
Abstract: Within a company's customer relationship management strategy, finding the customers most likely to leave is a central aspect. We present a dynamic modelling approach for predicting individual customers’ risk of leaving an insurance company. A logistic longitudinal regression model that incorporates time-dynamic explanatory variables and interactions is fitted to the data. As an intermediate step in the modelling procedure, we apply generalised additive models to identify non-linear relationships between the logit and the explanatory variables. Both out-of-sample and out-of-time prediction indicate that the model performs well in terms of identifying customers likely to leave the company each month. Our approach is general and may be applied to other industries as well.

42 citations
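A minimal sketch of the kind of model described: a logistic regression for churn with a time-dynamic covariate and an interaction, fitted with statsmodels. The synthetic panel, covariate names and effect sizes are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 20_000
df = pd.DataFrame({
    "tenure_months": rng.integers(1, 120, n),          # time-dynamic variable
    "n_products": rng.integers(1, 5, n),
    "recent_claim": rng.integers(0, 2, n),
})
logit = (-2.5 - 0.01 * df.tenure_months + 0.8 * df.recent_claim
         - 0.3 * df.n_products + 0.005 * df.tenure_months * df.recent_claim)
df["left"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("left ~ tenure_months * recent_claim + n_products", data=df).fit()
print(model.summary().tables[1])
```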


Journal ArticleDOI
TL;DR: In this article, the authors investigate the optimal form of reinsurance from the perspective of an insurer when he decides to cede part of the loss to two reinsurers, where the first reinsurer calculates the premium by expected value principle while the premium principle adopted by the second reinsurer satisfies three axioms: distribution invariance, risk loading, and preserving stop-loss order.
Abstract: In this paper, we investigate the optimal form of reinsurance from the perspective of an insurer when he decides to cede part of the loss to two reinsurers, where the first reinsurer calculates the premium by expected value principle while the premium principle adopted by the second reinsurer satisfies three axioms: distribution invariance, risk loading, and preserving stop-loss order. In order to exclude the moral hazard, a typical reinsurance treaty assumes that both the insurer and reinsurers are obligated to pay more for the larger loss. Under the criterion of minimizing value at risk (VaR) or conditional value at risk (CVaR) of the insurer's total risk exposure, we show that an optimal reinsurance policy is to cede two adjacent layers, where the upper layer is distributed to the first reinsurer. To further illustrate the applicability of our results, we derive explicitly the optimal layer reinsurance by assuming a generalized Wang's premium principle to the second reinsurer.

38 citations
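To make the layer structure concrete, the Monte Carlo sketch below computes the insurer's total risk exposure when two adjacent layers are ceded, the lower one to the second reinsurer and the upper one to the first, and evaluates VaR and CVaR of that exposure. The loss distribution, loadings and layer points are illustrative assumptions, and an expected-value premium is used for both reinsurers purely for simplicity, rather than the general premium principle treated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
x = 100 * rng.pareto(2.5, 1_000_000)          # ground-up loss (illustrative)

def layer(x, a, b):
    """Part of the loss falling in the layer (a, b]."""
    return np.clip(x, a, b) - a

def total_cost(x, d0, d1, d2, load1=0.2, load2=0.3):
    ceded_lower = layer(x, d0, d1)            # to reinsurer 2 (lower layer)
    ceded_upper = layer(x, d1, d2)            # to reinsurer 1 (upper layer)
    retained = x - ceded_lower - ceded_upper
    # expected-value premiums (a simplification for the second reinsurer)
    premium = (1 + load2) * ceded_lower.mean() + (1 + load1) * ceded_upper.mean()
    return retained + premium                 # insurer's total risk exposure

cost = total_cost(x, d0=100, d1=300, d2=800)
var99 = np.quantile(cost, 0.99)
print("VaR 99%:", var99)
print("CVaR 99%:", cost[cost >= var99].mean())
```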


Journal ArticleDOI
TL;DR: In this article, computationally tractable expressions for the expected discounted penalty function (EDPF) are developed for a class of Lévy risk processes, including expressions over a finite-time horizon, so that associated risk measures such as VaR and CVaR can be evaluated.
Abstract: Ever since the first introduction of the expected discounted penalty function (EDPF), it has been widely acknowledged that it contains information that is relevant from a risk management perspective. Expressions for the EDPF are now available for a wide range of models, in particular for a general class of Lévy risk processes. Yet, in order to capitalize on this potential for applications, these expressions must be computationally tractable enough to allow for the evaluation of associated risk measures such as Value at Risk (VaR) or Conditional Value at Risk (CVaR). Most of the models studied so far offer few interesting examples for which computation of the associated EDPF can be carried out to the point where evaluation of risk measures is possible. Another drawback of existing examples is that the expressions are available for an infinite-time horizon EDPF only. Yet, realistic applications would require the computation of an EDPF over a finite-time horizon. In this paper we address these two issues.

38 citations


Journal ArticleDOI
TL;DR: In this article, an extension to the renewal or Sparre Andersen risk process by introducing a dependence structure between the claim sizes and the interclaim times through a Farlie-Gumbel-Morgenstern copula was considered.
Abstract: In this article, we consider an extension to the renewal or Sparre Andersen risk process by introducing a dependence structure between the claim sizes and the interclaim times through a Farlie–Gumbel–Morgenstern copula proposed by Cossette et al. (2010) for the classical compound Poisson risk model. We consider that the inter-arrival times follow the Erlang(n) distribution. By studying the roots of the generalised Lundberg equation, the Laplace transform (LT) of the expected discounted penalty function is derived and a detailed analysis of the Gerber–Shiu function is given when the initial surplus is zero. It is proved that this function satisfies a defective renewal equation and its solution is given through the compound geometric tail representation of the LT of the time to ruin. Explicit expressions for the discounted joint and marginal distribution functions of the surplus prior to the time of ruin and the deficit at the time of ruin are derived. Finally, for exponential claim sizes explicit expressions and numerical examples for the ruin probability and the LT of the time to ruin are given.

31 citations
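The dependence structure used above is easy to simulate: draw pairs from the Farlie–Gumbel–Morgenstern copula by conditional inversion and push them through the marginal quantile functions. The Erlang(2) interclaim times, exponential claim sizes and dependence parameter below are illustrative choices, not the paper's calibration.

```python
import numpy as np
from scipy.stats import gamma, expon

rng = np.random.default_rng(0)

def fgm_pairs(size, theta=0.8):
    """Sample (U, V) from the FGM copula by conditional inversion:
    C_{2|1}(v|u) = v + theta*(1-2u)*v*(1-v); solve the quadratic for v."""
    u = rng.uniform(size=size)
    w = rng.uniform(size=size)
    a = theta * (1 - 2 * u)
    with np.errstate(divide="ignore", invalid="ignore"):
        v = np.where(np.abs(a) < 1e-12, w,
                     ((1 + a) - np.sqrt((1 + a) ** 2 - 4 * a * w)) / (2 * a))
    return u, v

u, v = fgm_pairs(100_000)
inter_times = gamma.ppf(u, a=2, scale=0.5)   # Erlang(2) interclaim times
claim_sizes = expon.ppf(v, scale=10.0)       # exponential claim sizes
print(np.corrcoef(inter_times, claim_sizes)[0, 1])
```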


Journal ArticleDOI
TL;DR: In this article, a nonparametric estimator for ruin probability in the classical risk model with unknown claim size distribution is presented, where the estimator is constructed by Fourier inversion and kernel density estimation method.
Abstract: In this paper, we present a nonparametric estimator for ruin probability in the classical risk model with unknown claim size distribution. We construct the estimator by Fourier inversion and kernel density estimation method. Under some conditions imposed on the kernel, bandwidth and claim size density, we present some large sample properties of the estimator. Some simulation studies are also given to show the finite sample performance of the estimator.

30 citations
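The paper's estimator is built from Fourier inversion and kernel smoothing; as a purely illustrative cross-check of the quantity being estimated, the sketch below evaluates the infinite-time ruin probability by Monte Carlo from the Pollaczek–Khinchine representation, plugging in an observed claim sample (a kernel-smoothed resampling step could be substituted for the raw sample). Parameters are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
claims = rng.gamma(2.0, 5.0, size=2_000)       # "observed" claim sizes (illustrative)
lam, premium_rate = 1.0, 12.0                  # Poisson claim rate and premium rate
rho = lam * claims.mean() / premium_rate       # must be < 1 for a positive loading

def ruin_prob(u, n_sim=100_000):
    """psi(u) = P(sum of N equilibrium-distributed ladder heights > u),
    with N geometric: P(N = k) = (1 - rho) * rho**k (Pollaczek-Khinchine)."""
    n = rng.geometric(1 - rho, n_sim) - 1      # ladder-height counts
    m = int(n.sum())
    # equilibrium draws: a size-biased claim times an independent uniform
    sized = rng.choice(claims, size=m, p=claims / claims.sum())
    heights = rng.uniform(size=m) * sized
    totals = np.bincount(np.repeat(np.arange(n_sim), n), weights=heights, minlength=n_sim)
    return (totals > u).mean()

print(ruin_prob(50.0))
```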


Journal ArticleDOI
TL;DR: In this article, the authors explore the usefulness of copulas to model the number of insurance claims for an individual policyholder within a longitudinal context, proposing elliptical copulas to accommodate the intertemporal nature of the ‘jittered’ claim counts and the unobservable subject-specific heterogeneity in the frequency of claims.
Abstract: Modeling insurance claim counts is a critical component in the ratemaking process for property and casualty insurance. This article explores the usefulness of copulas to model the number of insurance claims for an individual policyholder within a longitudinal context. To address the limitations of copulas commonly attributed to multivariate discrete data, we adopt a ‘jittering’ method for the claim counts which has the effect of continuitizing the data. Elliptical copulas are proposed to accommodate the intertemporal nature of the ‘jittered’ claim counts and the unobservable subject-specific heterogeneity on the frequency of claims. Observable subject-specific effects are accounted for in the model by using available covariate information through a regression model. The predictive distribution together with the corresponding credibility of claim frequency can be derived from the model for ratemaking and risk classification purposes. For empirical illustration, we analyze an unbalanced longitudinal dataset of c...

28 citations
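The 'jittering' step itself is simple to illustrate: adding independent uniform noise to the integer counts produces continuous pseudo-observations to which copulas for continuous margins can then be applied. The Poisson counts below are synthetic placeholders.

```python
import numpy as np
from scipy.stats import rankdata

rng = np.random.default_rng(0)
counts = rng.poisson(0.2, size=(1_000, 5))            # policyholder x year claim counts
jittered = counts + rng.uniform(size=counts.shape)    # continuitized ('jittered') counts
# pseudo-uniform scores per year, ready for an elliptical copula fit
u = (rankdata(jittered, axis=0) - 0.5) / jittered.shape[0]
print(u.min(), u.max())
```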


Journal ArticleDOI
TL;DR: In this article, the authors apply trend models from non-life claims reserving to age-period-cohort mortality trends to estimate mortality improvement and quantifying its uncertainty.
Abstract: Longevity risk arising from uncertain mortality improvement is one of the major risks facing annuity providers and pension funds. In this article, we show how applying trend models from non-life claims reserving to age-period-cohort mortality trends provides new insight into estimating mortality improvement and quantifying its uncertainty. Age, period and cohort trends are modelled with distinct effects for each age, calendar year and birth year in a generalised linear models framework. The effects are distinct in the sense that they are not conjoined with age coefficients; borrowing from regression terminology, we denote them as main effects. Mortality models in this framework for age-period, age-cohort and age-period-cohort effects are assessed using national population mortality data from Norway and Australia to show the relative significance of cohort effects as compared to period effects. Results are compared with the traditional Lee–Carter model. The bilinear period effect in the Lee–Carter model is s...
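A minimal sketch of the 'main effects' formulation on synthetic data: a Poisson GLM for death counts with factor effects for age and calendar year and a log-exposure offset, fitted with statsmodels. Adding C(cohort) gives the age-cohort or age-period-cohort variants, subject to the usual APC identification constraint; the data and coefficients here are purely illustrative.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
ages, years = np.arange(60, 90), np.arange(1990, 2010)
df = pd.MultiIndex.from_product([ages, years], names=["age", "year"]).to_frame(index=False)
df["cohort"] = df.year - df.age
df["exposure"] = 10_000.0
df["deaths"] = rng.poisson(np.exp(-9.0 + 0.09 * df.age - 0.01 * (df.year - 1990)) * df.exposure)

# age-period model with distinct ('main') effects for each age and calendar year;
# swap in or add C(cohort) for the age-cohort / age-period-cohort variants
ap_model = smf.glm("deaths ~ C(age) + C(year)", data=df,
                   family=sm.families.Poisson(), offset=np.log(df.exposure)).fit()
print(ap_model.aic)
```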

Journal ArticleDOI
TL;DR: In this article, the authors consider an insurance company whose surplus is represented by the classical Cramer-Lundberg process, and the objective is to find an optimal investment policy that minimizes the probability of ruin.
Abstract: We consider an insurance company whose surplus is represented by the classical Cramér–Lundberg process. The company can invest its surplus in a risk-free asset and in a risky asset governed by the Black–Scholes equation. There is a constraint that the insurance company can only invest in the risky asset at a limited leveraging level; more precisely, when purchasing, the ratio of the investment amount in the risky asset to the surplus level is no more than a, and when short-selling, the proportion of the proceeds from the short-selling to the surplus level is no more than b. The objective is to find an optimal investment policy that minimizes the probability of ruin. The minimal ruin probability as a function of the initial surplus is characterized by a classical solution to the corresponding Hamilton–Jacobi–Bellman (HJB) equation. We study the optimal control policy and its properties. The interrelation between the parameters of the model plays a crucial role in the qualitative behavior of the optimal policy.

Journal ArticleDOI
TL;DR: In this article, a new distribution capable of exhibiting all the possible modes of accelerating and decelerating mortality in humans was used to conduct a systematic investigation of late-life mortality, finding that the onset of mortality deceleration is progressively delayed in Western societies but that there is evidence of mortality plateauing at earlier ages.
Abstract: Using a new distribution capable of exhibiting all the possible modes of accelerating and decelerating mortality, we conduct a systematic investigation of late-life mortality in humans. We check the insensitivity of the distribution to age cutoffs in the data relative to the logistic mortality model and propose a method to forecast evolution in the characteristic deceleration ages of the distribution. A number of data sets have been explored, with a particular emphasis on those originating from Scandinavia. Although those from Australia, Canada, and the USA are compatible with Gompertzian mortality, those from the other countries examined are not. We find in particular that the onset of mortality deceleration is being progressively delayed in Western societies but that there is evidence of mortality plateauing at earlier ages.

Journal ArticleDOI
TL;DR: In this article, the authors investigated the number of phases required to achieve a pre-specified accuracy for estimating the ruin probability in the classical risk model, and provided error bounds for different types of phases.
Abstract: Numerical evaluation of ruin probabilities in the classical risk model is an important problem. If claim sizes are heavy-tailed, then such evaluations are challenging. To overcome this, an attractive way is to approximate the claim sizes with a phase-type distribution. What is not clear though is how many phases are enough in order to achieve a specific accuracy in the approximation of the ruin probability. The goals of this paper are to investigate the number of phases required so that we can achieve a pre-specified accuracy for the ruin probability and to provide error bounds. Also, in the special case of a completely monotone claim size distribution we develop an algorithm to estimate the ruin probability by approximating the excess claim size distribution with a hyperexponential one. Finally, we compare our approximation with the heavy traffic and heavy tail approximations.

Journal ArticleDOI
TL;DR: In this paper, a semiparametric estimator of the ruin probability of an insurance company is proposed for the classical Poisson risk model when the claim size distribution and the Poisson arrival rate are unknown.
Abstract: The ruin probability of an insurance company is a central topic in risk theory. We consider the classical Poisson risk model when the claim size distribution and the Poisson arrival rate are unknown. Given a sample of inter-arrival times and corresponding claims, we propose a semiparametric estimator of the ruin probability. We establish properties of strong consistency and asymptotic normality of the estimator and study bootstrap confidence bands. Further, we present a simulation example in order to investigate the finite sample properties of the proposed estimator.

Journal ArticleDOI
TL;DR: In this article, the authors present a new model for pricing catastrophe excess of loss cover (Cat XL), in which annual claim cost is modelled as a compound Poisson process of catastrophe costs and the distribution of the cost of each catastrophe is evaluated.
Abstract: What is the catastrophe risk a life insurance company faces? What is the correct price of a catastrophe cover? During a review of the current standard model, due to Strickler, we found that this model has some serious shortcomings. We therefore present a new model for the pricing of catastrophe excess of loss cover (Cat XL). The new model for annual claim cost C is based on a compound Poisson process of catastrophe costs. To evaluate the distribution of the cost of each catastrophe, we use the Peaks Over Threshold model for the total number of lost lives in each catastrophe and the beta binomial model for the proportion of these corresponding to customers of the insurance company. To be able to estimate the parameters of the model, international and Swedish data were collected and compiled, listing accidents claiming at least twenty and four lives, respectively. Fitting the new model to data, we find the fit to be good. Finally we give the price of a Cat XL contract and perform a sensitivity analysis of h...
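The model structure described (a compound Poisson process over catastrophes, Peaks Over Threshold for lives lost, and a beta-binomial for the insured share) can be simulated directly; the sketch below does so with purely illustrative parameter values and then reads off the expected loss to an example Cat XL layer.

```python
import numpy as np
from scipy.stats import genpareto, betabinom

rng = np.random.default_rng(0)

def annual_costs(n_years=20_000, cat_rate=1.5, threshold=20,
                 xi=0.4, scale=15.0, a=2.0, b=50.0, benefit=1.0e6):
    n_cats = rng.poisson(cat_rate, n_years)                       # catastrophes per year
    m = int(n_cats.sum())
    # lives lost per catastrophe: threshold plus a generalized Pareto excess (POT)
    lives = (threshold + genpareto.rvs(xi, scale=scale, size=m, random_state=rng)).astype(int)
    # insured lives among them: beta-binomial thinning of the lives lost
    insured = betabinom.rvs(lives, a, b, random_state=rng)
    year = np.repeat(np.arange(n_years), n_cats)
    return np.bincount(year, weights=insured * benefit, minlength=n_years)

c = annual_costs()
print("mean annual catastrophe cost:", c.mean())
print("expected loss to a 5m xs 5m Cat XL layer:", np.clip(c - 5e6, 0, 5e6).mean())
```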

Journal ArticleDOI
TL;DR: In this article, the authors examined how the coefficient of variation varies with the number of members and found that even with a few hundred members in the scheme, idiosyncratic mortality risk may still be significant.
Abstract: A risk of small defined-benefit pension schemes is that there are too few members to eliminate idiosyncratic mortality risk, that is, there are too few members to effectively pool mortality risk. This means that when there are few members in the scheme, there is an increased risk of the liability value deviating significantly from the expected liability value, as compared to a large scheme. We quantify this risk through examining the coefficient of variation of a scheme's liability value relative to its expected value. We examine how the coefficient of variation varies with the number of members and find that, even with a few hundred members in the scheme, idiosyncratic mortality risk may still be significant. Next we quantify the amount of the mortality risk concentrated in the executive section of the scheme, where the executives receive a benefit that is higher than the non-executive benefit. We use the Euler capital allocation principle to allocate the total standard deviation of the liability value b...
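The effect the abstract quantifies can be reproduced in miniature: simulate the scheme's liability many times for different membership sizes and compare the coefficient of variation. The Gompertz mortality, interest rate and flat pension amount below are illustrative assumptions only.

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, age0, rate, pension = 1e-5, 0.11, 65.0, 0.03, 1.0

def liability(n_members, n_sim=2_000):
    totals = np.empty(n_sim)
    for s in range(n_sim):
        u = rng.uniform(size=n_members)
        # remaining lifetime drawn from a Gompertz survival curve, conditional on age0
        t = np.log1p(-b / a * np.exp(-b * age0) * np.log(u)) / b
        # present value of a continuous annuity of 1 per year paid until death
        totals[s] = (pension * (1 - np.exp(-rate * t)) / rate).sum()
    return totals

for n in (50, 200, 1_000):
    tot = liability(n)
    print(n, "members, CV of liability:", tot.std() / tot.mean())
```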

Journal ArticleDOI
TL;DR: In this article, the effects of mandatory unisex tariffs in insurance contracts, such as those required by a recent ruling of the European Court of Justice, on equilibrium insurance premia and equilibrium welfare are analyzed.
Abstract: We analyze the effects of mandatory unisex tariffs in insurance contracts, such as those required by a recent ruling of the European Court of Justice, on equilibrium insurance premia and equilibrium welfare. In a unified framework, we provide a quantitative analysis of the associated insurance market equilibria in both monopolistic and competitive insurance markets. We investigate the welfare loss caused by regulatory adverse selection and show that unisex tariffs may cause market distortions that significantly reduce overall social welfare.

Journal ArticleDOI
TL;DR: In this article, the aging process is represented as the passage through a number of phases of decreasing vitality; when disabled, individuals additionally pass through several stages that represent duration of disability. Explicit and easily calculable expressions are obtained for relevant probabilities and actuarial present values.
Abstract: This paper explores the use of phase-type models in actuarial calculations for disability insurance. We demonstrate that the changes in status of disability insureds can be appropriately captured by a phase-type model. Our model represents the aging process as the passage through a number of phases of decreasing vitality. When disabled, individuals additionally pass through several stages that represent duration of disability. Recovery and mortality rates from the earlier stages are greater than those in later stages. Using such a model, explicit and easily calculable expressions are obtained for relevant probabilities and actuarial present values. This facilitates the calculation of premiums and reserves.
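The computational pay-off of a phase-type formulation is that actuarial present values reduce to small matrix expressions: with initial distribution alpha over the transient (alive) phases and sub-generator T, a continuous life annuity of 1 per year has expected present value alpha (delta*I - T)^{-1} 1. The three-phase generator below is an illustrative stand-in, not the paper's calibrated aging/disability structure.

```python
import numpy as np

alpha = np.array([1.0, 0.0, 0.0])        # start in the healthiest phase
T = np.array([[-0.06,  0.05,  0.00],     # transitions between vitality phases;
              [ 0.00, -0.15,  0.10],     # the row deficits are the exit (death)
              [ 0.00,  0.00, -0.40]])    # intensities from each phase
delta = 0.03                             # force of interest

ones = np.ones(3)
annuity_apv = alpha @ np.linalg.solve(delta * np.eye(3) - T, ones)
life_expectancy = alpha @ np.linalg.solve(-T, ones)   # the delta = 0 special case
print(annuity_apv, life_expectancy)
```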

Journal ArticleDOI
Abstract: In this paper, we consider a fairly large class of dependent Sparre Andersen risk models where the claim sizes belong to the class of Coxian distributions. We analyze the Gerber–Shiu discounted penalty function when the penalty function depends on the deficit at ruin. We show that the system of equations needed to solve for this quantity is surprisingly simple. Various applications of this result are also considered.

Journal ArticleDOI
TL;DR: The authors showed that the complete monotonicity is preserved under mixed geometric compounding, and hence show that the ruin probability, the Laplace transform of the ruin time, and the density of the tail of the joint distribution of ruin and the deficit at ruin in the Sparre Andersen model are completely monotone.
Abstract: We prove that the complete monotonicity is preserved under mixed geometric compounding, and hence show that the ruin probability, the Laplace transform of the ruin time, and the density of the tail of the joint distribution of ruin and the deficit at ruin in the Sparre Andersen model are completely monotone if the claim size distribution has a completely monotone density.

Journal ArticleDOI
TL;DR: Brazauskas and Kleefeld (2011) discussed the use of folded normal and folded t distributions for insurance loss data and applied them to a set of Norwegian fire claims data from Beirlant et al. (1996).
Abstract: Folded distributions have been around for some years. The folded normal distribution developed by Leone et al. (1961) and the folded t distribution developed by Psarakis and Panaretos (1990) were designed for use in situations in which measurements (generally differences or deviations) were recorded without their algebraic signs. The folded distribution models have a direct and meaningful physical interpretation when they are applied in these situations. Geometrically speaking, the negative side of the distribution is being folded onto the positive side. If insurance losses in a certain situation might be either positive or negative and the losses were recorded without their signs, then a folded model would obviously be appropriate and could sensibly be applied. However, this situation rarely (if ever) occurs in an insurance context. This does not mean that folded models cannot be applied to insurance data sets. But it does mean that the standard interpretation supporting the use of folded models no longer applies. Brazauskas and Kleefeld (2011) discuss the use of folded normal and folded t distributions for insurance loss data and apply them to a set of Norwegian fire claims data from Beirlant et al. (1996). This data set corresponds to the total damage done by n = 827 fires in Norway for the year 1988 which exceed a priority of 500,000 Norwegian kroner. Let x denote the claim values, recorded in thousands of kroner. Brazauskas and Kleefeld (2011) motivate the use of folded distributions in this example with the observation that the histogram of the transformed data values y = ln(x/500) approximately mimics the shape of a folded normal distribution’s probability density function. A folded t distribution is similar to a folded normal but with a thicker right-hand tail. Brazauskas and Kleefeld (2011) fit a folded t7 distribution to the (rescaled and log-transformed) data y and observe that this distribution provides a good fit. They also fit the competing exponential, gamma, generalized Pareto distribution (GPD), and Weibull distributions to the transformed data. This is meant to demonstrate that the proposed folded model provides a better fit to the Norway claims data set than these other competing models do. Based on the χ² goodness-of-fit test, they conclude that the folded t7 fit should be accepted while the other fits should be rejected (i.e. at all ‘typical’ significance levels α = 0.01, 0.05, and 0.10). Among the competitor models, the GPD provided the best fit. Some fit statistics supporting the superiority of the fit of the folded
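The transform-then-fit step described above is easy to mimic: rescale and log-transform the claims, y = ln(x/500), and fit a folded distribution to y. SciPy ships a folded normal but not a folded t, so the folded normal stands in below, and the data are synthetic placeholders rather than the Norwegian fire claims.

```python
import numpy as np
from scipy.stats import foldnorm

rng = np.random.default_rng(0)
x = 500 * np.exp(np.abs(rng.standard_t(7, size=827)))   # synthetic claims exceeding 500
y = np.log(x / 500.0)                                    # rescaled, log-transformed data

c, loc, scale = foldnorm.fit(y, floc=0)                  # fold about zero
print("fitted folded-normal parameters:", c, scale)
```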

Journal ArticleDOI
TL;DR: In this paper, the authors consider a risk reserve process where the arrivals (either claims or capital injections) occur according to a Markovian point process and derive a generalised Gerber-Shiu measure that is the joint distribution of the time to ruin, surplus immediately before ruin, the deficit at ruin, and the minimal risk reserve before ruin.
Abstract: In this paper we consider a risk reserve process where the arrivals (either claims or capital injections) occur according to a Markovian point process. Both claim and capital injection sizes are phase-type distributed and the model allows for possible correlations between these and the inter-claim times. The premium income is modelled by a Markov-modulated Brownian motion which may depend on the underlying phases of the point arrival process. For this risk reserve model we derive a generalised Gerber–Shiu measure that is the joint distribution of the time to ruin, the surplus immediately before ruin, the deficit at ruin, the minimal risk reserve before ruin, and the time until this minimum is attained. Numerical examples illustrate the influence of the parameters on selected marginal distributions.

Journal ArticleDOI
TL;DR: The optimal investment problem of an insurance company in the presence of a risk constraint and regime-switching is investigated using a game theoretic approach; a dynamic risk constraint is considered in which the uncertainty aversion to the ‘true’ model for financial risk is constrained at a given level.
Abstract: We investigate an optimal investment problem of an insurance company in the presence of risk constraint and regime-switching using a game theoretic approach. A dynamic risk constraint is considered where we constrain the uncertainty aversion to the ‘true’ model for financial risk at a given level. We describe the surplus of an insurance company using a general jump process, namely, a Markov-modulated random measure. The insurance company invests the surplus in a risky financial asset whose dynamics are modeled by a regime-switching geometric Brownian motion. To incorporate model uncertainty, we consider a robust approach, where a family of probability measures is considered and the insurance company maximizes the expected utility of terminal wealth in the ‘worst-case’ probability scenario. The optimal investment problem is then formulated as a constrained two-player, zero-sum, stochastic differential game between the insurance company and the market. Different from the other works in the literature, our te...

Journal ArticleDOI
TL;DR: In this article, the conditional specification technique is applied to look for more flexible distributions than the traditional ones used in the actuarial literature, as the Poisson, negative binomial and others.
Abstract: Bivariate distributions, specified in terms of their conditional distributions, provide a powerful tool to obtain flexible distributions. These distributions play an important role in specifying the conjugate prior in certain multi-parameter Bayesian settings. In this paper, the conditional specification technique is applied to look for more flexible distributions than the traditional ones used in the actuarial literature, such as the Poisson, negative binomial and others. The new specification draws inferences about parameters of interest in problems appearing in actuarial statistics. Two unconditional (discrete) distributions obtained are studied and used in the collective risk model to compute the right-tail probability of the aggregate claim size distribution. Comparisons with the compound Poisson and compound negative binomial are made.
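For the compound Poisson benchmark mentioned in the comparison, the right-tail probability of the aggregate claim size distribution can be computed by Panjer's recursion; the sketch below uses an arbitrary discrete severity for illustration.

```python
import numpy as np

def compound_poisson_panjer(lam, severity_pmf, s_max):
    """severity_pmf[j] = P(claim size = j), j = 0..len-1; returns the aggregate pmf."""
    f = np.zeros(s_max + 1)
    f[:len(severity_pmf)] = severity_pmf
    g = np.zeros(s_max + 1)
    g[0] = np.exp(-lam * (1 - f[0]))
    for s in range(1, s_max + 1):
        j = np.arange(1, s + 1)
        g[s] = (lam / s) * np.sum(j * f[j] * g[s - j])   # Panjer recursion, Poisson case
    return g

severity = [0.0, 0.5, 0.3, 0.2]        # P(size=1)=0.5, P(size=2)=0.3, P(size=3)=0.2
g = compound_poisson_panjer(lam=3.0, severity_pmf=severity, s_max=60)
print("P(S > 20) =", 1 - g[:21].sum())
```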

Journal ArticleDOI
TL;DR: In this article, the authors compare the performance of the truncated generalized Pareto distribution (GPD) and the composite lognormal-Pareto (LNPa) with the folded-t7 (FT7) model.
Abstract: We thank Dr. Scollnik for reading our paper carefully and for pointing out an important issue in the numerical example of the paper. Yes, we agree that our comparison of the newly proposed model with its closest competitors was “quick” and a bit unfair to the other distributions. Indeed, using the log-transformed data instead of original data for comparing the fits of various distributions gives a home-court advantage to the folded-t7 (FT7) model. As one can see from Tables 1 and 2 in Scollnik (2012), the logarithmic transformation changes the values of statistical performance measures for the truncated generalized Pareto distribution (GPD), as it should, and makes the GPD a much more competitive model for the data under consideration. Moreover, we see that the fit of the truncated lognormal model is borderline and that of the truncated composite lognormal-Pareto (LNPa) is excellent. Using the fminsearch function in MATLAB for finding maximum likelihood estimators, we were able to replicate (within a small margin of rounding error) all numbers in Table 2 of the discussion paper. The direct fit of the GPD to the Norwegian fire claims now clearly passes the χ² test and the values of its (appropriately transformed) negative log-likelihood, NLL, and the Akaike information criterion (AIC) are substantially smaller. However, while the GPD looks more competitive now, it is still uniformly outperformed by the FT7 model, according to the NLL, AIC and χ² criteria. Consequently, since the truncated lognormal model yields inferior fit when compared to that of the GPD, it is also uniformly outperformed by the FT7 model. Further, since the LNPa model has three parameters (all other distributions under consideration have at most two parameters), it was not viewed in our paper as one of the “closest competitors”. Nonetheless, it certainly fits the Norwegian data very well and thus merits further investigation. To this end, we first note that, under reasonable circumstances, one would expect the model with more parameters to fit the given data set better and to have a smaller NLL than a more parsimonious model. Therefore, in such situations information-based decision rules, such as the AIC and Schwarz Bayesian criterion (SBC), come in handy. According to the AIC measure, the penalty to the LNPa model for having an additional parameter is relatively small and thus the LNPa outperforms the FT7 model (AIC_LNPa = 1688.521 < 1690.834 = AIC_FT7). On the other hand, according to the SBC measure, the conclusion is opposite: the FT7 outperforms the LNPa model (SBC_FT7 = 1700.270 < 1702.674 = SBC_LNPa). Note that in all these comparisons the FT7 was treated as a two-parameter model, although its degrees of freedom were fixed (ν = 7). When both parameters are estimated using
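The decision rules used in this reply are the standard ones: AIC = 2·NLL + 2k and SBC = 2·NLL + k·ln(n), with n = 827 claims. In the sketch below, the negative log-likelihoods are back-solved from the AIC values quoted above (reconstructions, not independently reported numbers); running it reproduces the quoted AIC and SBC figures.

```python
import numpy as np

def aic(nll, k):
    return 2.0 * nll + 2.0 * k

def sbc(nll, k, n):
    return 2.0 * nll + k * np.log(n)

n = 827                                     # Norwegian fire claims exceeding the priority
# NLL values back-solved from the AIC figures quoted in the text
models = {"FT7": (843.417, 2), "LNPa": (841.2605, 3)}
for name, (nll, k) in models.items():
    print(name, "AIC:", round(aic(nll, k), 3), "SBC:", round(sbc(nll, k, n), 3))
```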

Journal ArticleDOI
TL;DR: In this article, a multivariate aggregate claims model, which allows dependencies among claim numbers as well as dependencies among different types of risks is introduced, and the joint probability functions of the various types of claims are computed using the multivariate fast Fourier transform.
Abstract: Insurance companies typically face multiple sources (types) of claims. Therefore, modelling dependencies among different types of risks is extremely important for evaluating the aggregate claims of an insurer. In this paper, we first introduce a multivariate aggregate claims model, which allows dependencies among claim numbers as well as dependencies among claim sizes. For this proposed model, we derive recursive formulas for the joint probability functions of different types of claims. In addition, we extend the concept of exponential tilting to the multivariate fast Fourier transform and use it to compute the joint probability functions of the various types of claims. We provide numerical examples to compare the accuracy and efficiency of the two computation methods.
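The FFT route can be shown in its simplest univariate form: transform the severity pmf, apply the compound Poisson probability generating function, and invert. The paper's contribution extends this, via exponential tilting, to the multivariate FFT with dependent claim numbers and sizes; the severity and frequency below are arbitrary illustrations.

```python
import numpy as np

def compound_poisson_fft(lam, severity_pmf, n=4096):
    """Aggregate claims pmf of a compound Poisson via the discrete Fourier transform."""
    f = np.zeros(n)
    f[:len(severity_pmf)] = severity_pmf
    phi = np.fft.fft(f)                        # severity characteristic function on a grid
    g = np.fft.ifft(np.exp(lam * (phi - 1)))   # compound Poisson pgf, then invert
    return g.real

severity = [0.0, 0.5, 0.3, 0.2]
g = compound_poisson_fft(lam=3.0, severity_pmf=severity)
print("P(S > 20) =", 1 - g[:21].sum())
```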

Journal ArticleDOI
TL;DR: In this article, an alternative proof of Kendall's identity is provided for a given class of spectrally negative Lévy processes, namely compound Poisson processes with diffusion, through the application of Lagrange's expansion theorem.
Abstract: In this paper, we propose to revisit Kendall’s identity (see, e.g. Kendall (1957)) related to the distribution of the first passage time for spectrally negative Lévy processes. We provide an alternative proof to Kendall’s identity for a given class of spectrally negative Lévy processes, namely compound Poisson processes with diffusion, through the application of Lagrange’s expansion theorem. This alternative proof naturally leads to an extension of this well-known identity by further examining the distribution of the number of jumps before the first passage time. In the process, we generalize some results of Gerber (1990) to the class of compound Poisson processes perturbed by diffusion. We show that this main result is particularly relevant to further our understanding of some problems of interest in actuarial science. Among others, we propose to examine the finite-time ruin probability of a dual Poisson risk model with diffusion or equally the distribution of a busy period in a specific fluid flow model...

Journal ArticleDOI
TL;DR: This paper develops statistical models to be used as a framework for estimating, and graduating, Critical Illness insurance diagnosis rates, and derives and discusses diagnosis rates for CI claims from ‘all causes’ and also from specific causes.
Abstract: In a series of two papers, this paper and the one by Ozkok et al. (Modelling critical illness claim diagnosis rates II: results), we develop statistical models to be used as a framework for estimating, and graduating, Critical Illness (CI) insurance diagnosis rates. We use UK data for 1999–2005 supplied by the Continuous Mortality Investigation (CMI) to illustrate their use. In this paper, we set out the basic methodology. In particular, we set out some models, we describe the data available to us and we discuss the statistical distribution of estimators proposed for CI diagnosis inception rates. A feature of CI insurance is the delay, on average about 6 months but in some cases much longer, between the diagnosis of an illness and the settlement of the subsequent claim. Modelling this delay, the so-called Claim Delay Distribution, is a necessary first step in the estimation of the claim diagnosis rates and this is discussed in the present paper. In the subsequent paper, we derive and discuss diagnosis rates for CI claims from ‘all causes’ and also from specific causes.

Journal ArticleDOI
TL;DR: In this paper, the optimal investment strategies for minimizing the probability of lifetime ruin under borrowing and short-selling constraints are found, where the investor withdraws money from the portfolio at a constant rate proportional to the portfolio value.
Abstract: In this paper, the optimal investment strategies for minimizing the probability of lifetime ruin under borrowing and short-selling constraints are found. The investment portfolio consists of multiple risky investments and a riskless investment. The investor withdraws money from the portfolio at a constant rate proportional to the portfolio value. In order to find the results, an auxiliary market is constructed, and the techniques of stochastic optimal control are used. Via this method, we show how the application of stochastic optimal control is possible for minimizing the probability of lifetime ruin problem defined under an auxiliary market.