
Showing papers by "Jens Perch Nielsen published in 2009"


Journal ArticleDOI
TL;DR: In this paper, a new canonical parametrisation is proposed to circumvent the inherent identification problem in the parametrisation, and the maximum likelihood estimators for the canonical parameter are simple, interpretable and easy to derive.
Abstract: It has long been known that maximum likelihood estimation in a Poisson model reproduces the chain-ladder technique. We revisit this model. A new canonical parametrisation is proposed to circumvent the inherent identification problem in the parametrisation. The maximum likelihood estimators for the canonical parameter are simple, interpretable and easy to derive. The boundary problem where all observations in one particular development year or one particular underwriting year are zero is also analysed.
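As a brief illustration of the classical result the abstract starts from, the following sketch (not the authors' code; the toy triangle and helper names are invented here) fits a cross-classified Poisson GLM with underwriting-year and development-year factors to a small incremental run-off triangle and checks that the resulting ultimates coincide with those of the chain-ladder technique.

```python
import numpy as np
import statsmodels.api as sm

# Incremental run-off triangle: rows = underwriting years, columns = development years.
tri = np.array([
    [100.0, 60.0, 30.0, 10.0],
    [110.0, 70.0, 35.0, np.nan],
    [120.0, 80.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])
n = tri.shape[0]

# Long format for the observed upper triangle.
rows, cols, y = [], [], []
for i in range(n):
    for j in range(n - i):
        rows.append(i); cols.append(j); y.append(tri[i, j])
y = np.array(y)

# Design matrix: intercept plus underwriting-year and development-year dummies.
X = np.zeros((len(y), 1 + 2 * (n - 1)))
X[:, 0] = 1.0
for k, (i, j) in enumerate(zip(rows, cols)):
    if i > 0:
        X[k, i] = 1.0            # underwriting-year dummies in columns 1 .. n-1
    if j > 0:
        X[k, n - 1 + j] = 1.0    # development-year dummies in columns n .. 2n-2
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

def mu(i, j):
    # Fitted Poisson mean for cell (i, j).
    eta = fit.params[0]
    if i > 0:
        eta += fit.params[i]
    if j > 0:
        eta += fit.params[n - 1 + j]
    return np.exp(eta)

# Classical chain-ladder ultimates from development factors on cumulative claims.
cum = np.nancumsum(tri, axis=1)
f = [sum(cum[i, j + 1] for i in range(n - j - 1)) /
     sum(cum[i, j] for i in range(n - j - 1)) for j in range(n - 1)]
cl_ult = [cum[i, n - 1 - i] * np.prod(f[n - 1 - i:]) for i in range(n)]

# Poisson-ML ultimates: observed cells plus fitted means for the missing cells.
glm_ult = [sum(tri[i, j] if j < n - i else mu(i, j) for j in range(n)) for i in range(n)]
print(np.allclose(cl_ult, glm_ult))   # True: Poisson ML reproduces chain-ladder
```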

53 citations


Journal ArticleDOI
TL;DR: In this paper, a tailor-made semiparametric asymmetric kernel density estimator was developed for the estimation of actuarial loss distributions. The estimator is obtained by transforming the data with the generalized Champernowne distribution initially fitted to the data.
Abstract: We develop a tailor-made semiparametric asymmetric kernel density estimator for the estimation of actuarial loss distributions. The estimator is obtained by transforming the data with the generalized Champernowne distribution initially fitted to the data. Then the density of the transformed data is estimated by use of local asymmetric kernel methods to obtain superior estimation properties in the tails. We find in a vast simulation study that the proposed semiparametric estimation procedure performs well relative to alternative estimators. An application to operational loss data illustrates the proposed method.
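A minimal sketch of the transformation idea described above, under simplifying assumptions made here for illustration: the generalized Champernowne CDF is taken in its two-parameter form (shift parameter c = 0), an ordinary symmetric Gaussian kernel is used on the transformed scale instead of the paper's local asymmetric kernels, and all function names and the simulated losses are invented.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def champ_cdf(x, a, M):
    # Generalized Champernowne CDF with shift parameter c = 0 (assumed form).
    return x**a / (x**a + M**a)

def champ_pdf(x, a, M):
    # Corresponding density (derivative of the CDF above).
    return a * x**(a - 1) * M**a / (x**a + M**a) ** 2

def fit_champernowne(x):
    # Maximum likelihood fit of (alpha, M); M is initialised at the sample median.
    def nll(p):
        a, M = np.exp(p)               # keep both parameters positive
        return -np.sum(np.log(champ_pdf(x, a, M)))
    res = minimize(nll, x0=np.log([1.0, np.median(x)]), method="Nelder-Mead")
    return np.exp(res.x)

def transformation_kde(x, grid, bandwidth=None):
    a, M = fit_champernowne(x)
    u = champ_cdf(x, a, M)             # transformed data in (0, 1)
    if bandwidth is None:
        bandwidth = 1.06 * np.std(u) * len(u) ** (-1 / 5)   # rule-of-thumb bandwidth
    ug = champ_cdf(grid, a, M)
    # Gaussian KDE on the transformed scale (boundary correction omitted here).
    g = norm.pdf((ug[:, None] - u[None, :]) / bandwidth).mean(axis=1) / bandwidth
    return g * champ_pdf(grid, a, M)   # back-transform by the Jacobian T'(x)

# Usage on simulated heavy-tailed "losses".
losses = np.random.default_rng(0).lognormal(mean=0.0, sigma=1.0, size=500)
grid = np.linspace(0.01, 20, 200)
density = transformation_kde(losses, grid)
```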

45 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider Danish insurance business lines for which the pricing methodology has been dramatically upgraded recently, and they show that experience rating improves this sophisticated pricing method as much as it originally improved pricing compared with a trivial flat rate.
Abstract: This article considers Danish insurance business lines for which the pricing methodology has recently been dramatically upgraded. This was a costly affair, but the benefits greatly exceed the costs; without a proper pricing mechanism, you are simply not competitive. We show that experience rating improves this sophisticated pricing method as much as the method originally improved pricing compared with a trivial flat rate. Hence, it is very important to take advantage of available customer experience. We verify that recent developments in multivariate credibility theory improve the prediction significantly, and we contribute to this theory with new robust estimation methods for time (in-)dependency.

INTRODUCTION

In this article, credibility theory and experience rating mean more or less the same thing; strictly speaking, however, credibility theory describes a theoretical model with a latent risk variable, whereas experience rating is the act of including observed experience in the rating process. This latter act is sometimes carried out in non-life insurance companies without a consistent theoretical model behind it. But since all experience rating in this article is based on a theoretical model, we can use the two expressions more or less interchangeably.

Credibility theory has a long tradition in actuarial science; we show in this article that there is indeed a good reason for this. In our concrete application to Danish commercial business lines, we show that the use of experience rating is as important as the use of pricing as such. In other words, we double the quality of the price rating by including credibility theory in the rating process. We also consider the recently developed method of multivariate experience rating, in which the latent risk parameter is allowed to be multidimensional such that each dimension represents one cover from the business line (see Englund et al., 2008). We also introduce a method to estimate a time effect in this model. We show that this more general version of credibility theory gives better results than classical one-dimensional credibility theory. We follow the standard approach of actuarial practitioners and use only frequency information in our credibility approach. However, the severity of experienced claims should contain some valuable information as well, indicating that there might be even more to gain from credibility theory if a robust and stable credibility method incorporating severity information in the experience rating is developed.

An early beginning of credibility theory appeared in Mowbray (1914) and Whitney (1918). After the elegant approach presented by Bühlmann (1967) and Bühlmann and Straub (1970), a large number of extensions have been derived; see Jewell (1974), Hachemeister (1975), Sundt (1979, 1981), and Zehnwirth (1985). See also Halliwell (1996), Greig (1999), and Bühlmann and Gisler (2005) for more comprehensive surveys.

Evolutionary models are not new in credibility theory. The idea is that recent claim information is more valuable than old claim information. This approach was introduced in the 1970s for one-dimensional credibility models (see Gerber and Jones, 1975a,b; De Vylder, 1976). Much of the work on time-dependent models focused on credibility formulas of the updating type. These recursive estimators were introduced by Mehra (1975) for credibility applications and further developed by De Vylder (1977), Sundt (1981), and Kremer (1982). For the time dependence in this article, we use a multivariate generalization of Sundt's recursive credibility estimator in which the risk parameter itself is modeled as an autoregressive process.

The article is organized as follows. In "Multidimensional Credibility Theory," we state the credibility model and the estimators in our multidimensional setup. The model is generalized in "Evolutionary Effects" to incorporate an evolutionary effect, and a recursive credibility estimator is stated. …
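For readers unfamiliar with experience rating, the sketch below implements the classical one-dimensional Bühlmann-Straub credibility estimator for claim frequencies, the baseline that the multivariate and evolutionary extensions discussed in the article build on. The data shapes and the moment estimators used are standard textbook choices, not the paper's own estimators, and the usage data are fabricated.

```python
import numpy as np

def buhlmann_straub(counts, exposures):
    """counts, exposures: arrays of shape (n_customers, n_years)."""
    X = counts / exposures                        # observed claim frequencies
    w_i = exposures.sum(axis=1)                   # total exposure per customer
    Xbar_i = (exposures * X).sum(axis=1) / w_i    # individual weighted means
    Xbar = (w_i * Xbar_i).sum() / w_i.sum()       # overall weighted mean

    I, J = X.shape
    # Within-customer variance component s^2 (balanced-panel moment estimator).
    s2 = (exposures * (X - Xbar_i[:, None]) ** 2).sum() / (I * (J - 1))
    # Between-customer variance component tau^2, truncated at zero.
    c = w_i.sum() - (w_i ** 2).sum() / w_i.sum()
    tau2 = max(((w_i * (Xbar_i - Xbar) ** 2).sum() - (I - 1) * s2) / c, 0.0)

    z = w_i * tau2 / (w_i * tau2 + s2)            # credibility weights
    return z * Xbar_i + (1 - z) * Xbar            # experience-rated frequencies

# Usage with fabricated data: 5 customers observed for 3 years.
rng = np.random.default_rng(0)
exposures = rng.uniform(0.5, 2.0, size=(5, 3))
counts = rng.poisson(0.1 * exposures)
print(buhlmann_straub(counts, exposures))
```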

23 citations


Journal ArticleDOI
TL;DR: In this article, a class of local linear kernel density estimators based on weighted least-squares kernel estimation is considered within the framework of Aalen's multiplicative intensity model, which allows for truncation and/or censoring in addition to accommodating unusual patterns of exposure as well as occurrence.
Abstract: A class of local linear kernel density estimators based on weighted least-squares kernel estimation is considered within the framework of Aalen's multiplicative intensity model. This model includes the filtered data model that, in turn, allows for truncation and/or censoring in addition to accommodating unusual patterns of exposure as well as occurrence. It is shown that the local linear estimators corresponding to all different weightings have the same pointwise asymptotic properties. However, the weighting previously used in the literature in the i.i.d. case is seen to be far from optimal when it comes to exposure robustness, and a simple alternative weighting is to be preferred. Indeed, this weighting has, effectively, to be well chosen in a ‘pilot’ estimator of the survival function as well as in the main estimator itself. We also investigate multiplicative and additive bias-correction methods within our framework. The multiplicative bias-correction method proves to be the best in a simulation study…
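A rough sketch of the kind of estimator the abstract discusses, under assumptions made here for illustration: grouped occurrence/exposure data on a time grid, an Epanechnikov kernel, and one particular choice of weighting (kernel times exposure) in the local linear weighted least-squares fit. The paper's point is precisely that the choice of weighting matters, so this is only one candidate, and all names and data are invented.

```python
import numpy as np

def local_linear_hazard(t_grid, occ, expo, eval_points, b):
    """Local linear fit of the raw rates occ/expo with weights K_b(t_i - t) * expo."""
    out = np.empty(len(eval_points))
    rate = occ / expo
    for k, t in enumerate(eval_points):
        u = (t_grid - t) / b
        w = np.where(np.abs(u) <= 1, 0.75 * (1 - u**2), 0.0) * expo  # Epanechnikov * exposure
        # Weighted least squares of rate on (1, t_i - t); the intercept is the estimate at t.
        X = np.column_stack([np.ones_like(t_grid), t_grid - t])
        WX = w[:, None] * X
        beta = np.linalg.solve(X.T @ WX, WX.T @ rate)
        out[k] = beta[0]
    return out

# Usage with fabricated grouped data.
rng = np.random.default_rng(3)
t_grid = np.linspace(0, 10, 50)
expo = np.full_like(t_grid, 100.0)             # exposure (time at risk) in each cell
true_hazard = 0.02 + 0.01 * t_grid
occ = rng.poisson(true_hazard * expo)          # observed occurrence counts
haz_hat = local_linear_hazard(t_grid, occ, expo, t_grid, b=2.0)
```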

15 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate a class of semiparametric models for panel datasets where the cross-section and time dimensions are large, and they propose estimation procedures based on local linear kernel smoothing; their estimators are all explicitly given.
Abstract: In this paper we investigate a class of semiparametric models for panel datasets where the cross-section and time dimensions are large. Our model contains a latent time series that is to be estimated and perhaps forecasted along with a nonparametric covariate effect. Our model is motivated by the need to be flexible with regard to functional form of covariate effects but also the need to be practical with regard to forecasting of time series effects. We propose estimation procedures based on local linear kernel smoothing; our estimators are all explicitly given. We establish the pointwise consistency and asymptotic normality of our estimators. We also show that the effects of estimating the latent time series can be ignored in certain cases.
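The paper's exact model and estimator are not reproduced here; the sketch below only illustrates the general idea under an assumed additive specification Y_it = theta_t + m(X_it) + eps_it, alternating between cross-sectional averaging for the latent time series theta_t and local linear kernel smoothing for the covariate effect m. All names and simulated data are illustrative.

```python
import numpy as np

def local_linear(x, y, x0, b):
    """Local linear regression of y on x, evaluated at the points x0."""
    out = np.empty(len(x0))
    for k, t in enumerate(x0):
        u = (x - t) / b
        w = np.exp(-0.5 * u**2)                        # Gaussian kernel weights
        X = np.column_stack([np.ones_like(x), x - t])
        WX = w[:, None] * X
        out[k] = np.linalg.solve(X.T @ WX, WX.T @ y)[0]
    return out

def fit_panel(Y, X, b, n_iter=10):
    """Y, X: arrays of shape (N, T). Returns the latent time series and m(X_it)."""
    theta = Y.mean(axis=0)                             # initial time effects
    m_hat = np.zeros_like(Y)
    for _ in range(n_iter):
        resid = (Y - theta[None, :]).ravel()
        m_flat = local_linear(X.ravel(), resid, X.ravel(), b)
        m_flat -= m_flat.mean()                        # identification: E[m(X)] = 0
        m_hat = m_flat.reshape(Y.shape)
        theta = (Y - m_hat).mean(axis=0)               # update the latent time series
    return theta, m_hat

# Usage with simulated data: N = 50 units, T = 20 periods.
rng = np.random.default_rng(1)
N, T = 50, 20
X = rng.uniform(-2, 2, size=(N, T))
theta_true = np.cumsum(rng.normal(0, 0.3, size=T))     # latent random-walk time series
Y = theta_true[None, :] + np.sin(X) + rng.normal(0, 0.2, size=(N, T))
theta_hat, m_hat = fit_panel(Y, X, b=0.4)
```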

10 citations


Book Chapter
01 Jan 2009
TL;DR: An extensive simulation study confirms that one-sided cross-validation clearly outperforms simple cross-validation, and in the density estimation context the superiority of the new method is even stronger than in regression.
Abstract: We introduce one-sided cross-validation to nonparametric kernel density estimation. The method is more stable than classical cross-validation and has a better overall performance, comparable to what we see in plug-in methods. One-sided cross-validation is a more direct data-driven method than plug-in methods, with weaker smoothness assumptions, since it does not require a smooth pilot with consistent second derivatives. Our conclusions for one-sided kernel density cross-validation are similar to those obtained by Hart and Yi (1998) when they introduced one-sided cross-validation in the regression context, except that in our context of density estimation the superiority of the new method is even stronger. An extensive simulation study confirms that our one-sided cross-validation clearly outperforms simple cross-validation. We conclude with real data applications.
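For context, the sketch below implements the classical least-squares cross-validation criterion for a Gaussian kernel density estimator, i.e. the "simple cross-validation" baseline referred to above. The paper's one-sided variant replaces the symmetric kernel in this criterion with a one-sided kernel and rescales the selected bandwidth by a kernel-dependent constant, which is not reproduced here; names and data are illustrative.

```python
import numpy as np
from scipy.stats import norm

def lscv_score(x, b):
    """Least-squares CV score: integral of fhat^2 minus twice the mean leave-one-out fhat."""
    n = len(x)
    d = x[:, None] - x[None, :]
    # For a Gaussian kernel, the integral of fhat^2 has a closed form.
    int_f2 = norm.pdf(d, scale=np.sqrt(2) * b).sum() / n**2
    # Leave-one-out density estimate at each observation.
    k = norm.pdf(d, scale=b)
    np.fill_diagonal(k, 0.0)
    loo = k.sum(axis=1) / (n - 1)
    return int_f2 - 2 * loo.mean()

def select_bandwidth(x, grid=None):
    if grid is None:
        grid = np.std(x) * np.linspace(0.05, 1.5, 60) * len(x) ** (-1 / 5)
    scores = [lscv_score(x, b) for b in grid]
    return grid[int(np.argmin(scores))]

# Usage: cross-validated bandwidth for a simulated sample.
x = np.random.default_rng(2).normal(size=400)
b_cv = select_bandwidth(x)
```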

9 citations


Posted Content
TL;DR: In this paper, a new canonical parametrisation is proposed to circumvent the inherent identification problem in the parametrisation, and the maximum likelihood estimators for the canonical parameter are simple, interpretable and easy to derive.
Abstract: It has long been known that maximum likelihood estimation in a Poisson model reproduces the chain-ladder technique. We revisit this model. A new canonical parametrisation is proposed to circumvent the inherent identification problem in the parametrisation. The maximum likelihood estimators for the canonical parameter are simple, interpretable and easy to derive. The boundary problem where all observations in one particular development year or one particular underwriting year are zero is also analysed.

7 citations


Book Chapter
01 Jan 2009
TL;DR: One-sided cross-validation is a more direct data-driven method than plug-in methods, with weaker smoothness assumptions, since it does not require a smooth pilot with consistent second derivatives.
Abstract: We introduce one-sided cross-validation to nonparametric kernel density estimation. The method is more stable than classical cross-validation and has a better overall performance, comparable to what we see in plug-in methods. One-sided cross-validation is a more direct data-driven method than plug-in methods, with weaker smoothness assumptions, since it does not require a smooth pilot with consistent second derivatives. Our conclusions for one-sided kernel density cross-validation are similar to those obtained by Hart and Yi (1998) when they introduced one-sided cross-validation in the regression context. An extensive simulation study confirms that our one-sided cross-validation clearly outperforms simple cross-validation. We conclude with real data applications.

3 citations


Posted Content
Abstract: Customer loyalty is one of the main business challenges, not least for the insurance sector. Nevertheless, only a few papers deal with this problem in the insurance field while specifically considering the uniqueness of this business sector. In this paper we define the conceptual framework for studying this problem in insurance and we propose a methodology to address it. With our methodological approach, it is possible to estimate the probability that a household with more than one insurance contract (policy) in the same insurance company (cross-buying) will cancel all policies simultaneously. For those who cancel some of their policies, but not all of them, we estimate the time they will stay with the company after that first policy cancellation, that is to say, the time the company has to try to retain a customer who has just given a clear signal of intending to leave. Additionally, we present and discuss the results obtained when applying our methodology to a policy cancellation dataset provided by a Danish insurance company, and we outline some conclusions regarding the factors associated with higher or lower customer loyalty.
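The paper's own methodology is not reproduced here; the sketch below only illustrates the two questions it addresses using generic off-the-shelf tools, with fabricated data and illustrative column names: a logistic regression for the probability that a cross-buying household cancels all policies at once, and a Kaplan-Meier estimator (from the lifelines package, assumed available) for the time a partially cancelling household remains a customer after its first cancellation.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from lifelines import KaplanMeierFitter

# Fabricated illustrative data: one row per cross-buying household.
rng = np.random.default_rng(4)
n = 500
households = pd.DataFrame({
    "n_policies": rng.integers(2, 6, n),
    "tenure_years": rng.uniform(0, 20, n),
    "cancelled_all_at_once": rng.integers(0, 2, n),
    "months_after_first_cancellation": rng.exponential(12, n),
    "left_company": rng.integers(0, 2, n),
})

# (1) Probability that a household cancels every policy simultaneously.
clf = LogisticRegression().fit(
    households[["n_policies", "tenure_years"]], households["cancelled_all_at_once"])

# (2) Time from a first (partial) cancellation until the household leaves,
#     right-censored for households still holding at least one policy.
partial = households[households["cancelled_all_at_once"] == 0]
kmf = KaplanMeierFitter()
kmf.fit(partial["months_after_first_cancellation"], event_observed=partial["left_company"])
```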

2 citations


Journal Article
Abstract: Customer loyalty is one of the main challenges facing the business sector, including insurance. Nevertheless, few studies deal specifically with this problem within the insurance sector while attending to its singularity within the business world. This paper defines the conceptual framework for studying the problem in the insurance setting and proposes a methodology for addressing it. With it, one can estimate the probability that a household with more than one insurance contract (policy) in the same insurance company (cross-buying) cancels all of them at once. Likewise, for those who cancel some but not all of their policies, it makes it possible to analyse how long they will remain customers after their first cancellation, that is, the time the company has to try to retain a policyholder who has just given a clear signal of wishing to leave the company. This paper also presents and discusses the results obtained when applying this methodology to a cancellation database provided by a Danish insurance company, and draws conclusions about the factors associated with higher or lower policyholder loyalty.