Author

Jens Perch Nielsen

Other affiliations: Codan, University of Copenhagen, University of London
Bio: Jens Perch Nielsen is an academic researcher from City University London. His research focuses on topics including estimators and kernel density estimation. He has an h-index of 37 and has co-authored 195 publications receiving 4,574 citations. Previous affiliations of Jens Perch Nielsen include Codan and the University of Copenhagen.


Papers
Journal ArticleDOI
TL;DR: In this paper, a simple kernel procedure based on marginal integration is defined that estimates the relevant univariate quantity in both additive and multiplicative nonparametric regression, avoiding the poor convergence rates of fully multidimensional smoothers.
Abstract: SUMMARY We define a simple kernel procedure based on marginal integration that estimates the relevant univariate quantity in both additive and multiplicative nonparametric regression. Nonparametric regression is frequently used as a preliminary diagnostic tool. It is a convenient method of summarising the relationship between a dependent and a univariate independent variable. However, when the explanatory variables are multidimensional, these methods are less satisfactory. In particular, the rate of convergence of standard estimators is poorer, while simple plots are not available to aid model selection. There are a number of simplifying structures that have been used to avoid these problems. These include the regression tree structure of Gordon & Olshen (1980), the projection pursuit model of Friedman & Stuetzle (1981), and semiparametric models such as those considered…
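
To make the marginal-integration idea concrete, here is a minimal sketch: estimate the full-dimensional regression with a standard Nadaraya-Watson smoother, then average it over the empirical distribution of the remaining covariates to recover the univariate component (up to a centring constant). The Gaussian product kernel, the single bandwidth h, and the choice of pilot smoother are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def nw_regression(x, X, Y, h):
    """Nadaraya-Watson estimate of E[Y | X = x] with a Gaussian product kernel."""
    w = np.exp(-0.5 * np.sum(((X - x) / h) ** 2, axis=1))
    return np.sum(w * Y) / np.sum(w)

def marginal_integration(x1_grid, X, Y, h):
    """Estimate the additive component in the first direction by averaging
    the full-dimensional regression estimate over the empirical distribution
    of the other covariates."""
    n = X.shape[0]
    est = np.empty(len(x1_grid))
    for k, x1 in enumerate(x1_grid):
        # average m_hat(x1, X_j2, ..., X_jd) over the observed covariates
        est[k] = np.mean([nw_regression(np.r_[x1, X[j, 1:]], X, Y, h)
                          for j in range(n)])
    return est
```

Under an additive model, the resulting curve estimates the first additive component at the one-dimensional rate, which is the point of the construction.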

553 citations

Journal ArticleDOI
TL;DR: In this paper, a new method for bias reduction in nonparametric density estimation is proposed, which is a simple, two-stage multiplicative bias correction, and its theoretical properties are investigated, and simulations indicate its practical potential.
Abstract: A new method for bias reduction in nonparametric density estimation is proposed. The method is a simple, two-stage multiplicative bias correction. Its theoretical properties are investigated, and simulations indicate its practical potential. The method is easy to compute and to analyse, and extends simply to multivariate and other estimation problems.
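
The two-stage multiplicative correction described above can be sketched in a few lines: compute a pilot kernel density estimate f_hat, then multiply it by a kernel estimate of the ratio f / f_hat, obtained by weighting each kernel contribution by 1 / f_hat(X_i). The Gaussian kernel and the shared bandwidth across both stages are simplifying assumptions of this sketch.

```python
import numpy as np

def kde(x, data, h):
    """Classical Gaussian kernel density estimate at the points x."""
    u = (x[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (len(data) * h * np.sqrt(2 * np.pi))

def mbc_kde(x, data, h):
    """Two-stage multiplicative bias correction:
    f_tilde(x) = f_hat(x) * (1/n) * sum_i K_h(x - X_i) / f_hat(X_i),
    i.e. the pilot estimate times a kernel estimate of the ratio f / f_hat."""
    pilot_at_data = kde(data, data, h)            # pilot f_hat at the sample points
    u = (x[:, None] - data[None, :]) / h
    K = np.exp(-0.5 * u ** 2) / (h * np.sqrt(2 * np.pi))
    correction = (K / pilot_at_data[None, :]).mean(axis=1)
    return kde(x, data, h) * correction
```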

178 citations

Journal ArticleDOI
TL;DR: In this paper, a unified approach to the estimation of loss distributions is presented, which involves determining the threshold level between large and small losses, and then estimating the density of the transformed data by use of the classical kernel density estimator.
Abstract: When estimating loss distributions in insurance, large and small losses are usually split because it is difficult to find a simple parametric model that fits all claim sizes. This approach involves determining the threshold level between large and small losses. In this article a unified approach to the estimation of loss distributions is presented. We propose an estimator obtained by transforming the data set with a modification of the Champernowne cdf and then estimating the density of the transformed data by use of the classical kernel density estimator. We investigate the asymptotic bias and variance of the proposed estimator. In a simulation study, the proposed method shows a good performance. We also present two applications dealing with claims costs in insurance.
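
A minimal sketch of the transformation approach: map the losses to (0, 1) with a Champernowne-type cdf, run a classical kernel density estimator on the transformed sample, and back-transform with the change-of-variables formula f(x) = f_hat(T(x)) * T'(x). The particular parametrisation of the cdf below, the Gaussian kernel, and the neglect of boundary correction near 0 and 1 are assumptions of this sketch, not the paper's exact modification.

```python
import numpy as np

def champernowne_cdf(x, alpha, M, c=0.0):
    """A modified Champernowne cdf mapping (0, inf) onto (0, 1).
    This parametrisation is one common form; treat it as an assumption.
    In practice M is often taken near the empirical median and alpha
    fitted by maximum likelihood."""
    num = (x + c) ** alpha - c ** alpha
    den = (x + c) ** alpha + (M + c) ** alpha - 2 * c ** alpha
    return num / den

def transformed_kde(x, losses, alpha, M, c=0.0, h=0.1):
    """Transform the losses to (0, 1), apply a classical Gaussian KDE there,
    and map back via f(x) = fhat(T(x)) * T'(x). Assumes x is bounded away
    from 0 so the numerical derivative stays inside the domain."""
    y = champernowne_cdf(losses, alpha, M, c)     # transformed sample in (0, 1)
    t = champernowne_cdf(x, alpha, M, c)          # transformed evaluation points
    u = (t[:, None] - y[None, :]) / h
    fhat_t = np.exp(-0.5 * u ** 2).sum(axis=1) / (len(losses) * h * np.sqrt(2 * np.pi))
    eps = 1e-6                                    # numerical derivative of T
    Tprime = (champernowne_cdf(x + eps, alpha, M, c)
              - champernowne_cdf(x - eps, alpha, M, c)) / (2 * eps)
    return fhat_t * Tprime
```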

126 citations

Journal ArticleDOI
TL;DR: In this paper, a new class of local linear hazard estimators based on weighted least square kernel estimation is considered, and a new bias correction technique based on bootstrap estimation of additive bias is proposed.
Abstract: A new class of local linear hazard estimators based on weighted least square kernel estimation is considered. The class includes the kernel hazard estimator of Ramlau-Hansen (1983), which has the same boundary correction property as the local linear regression estimator (see Fan & Gijbels, 1996). It is shown that all the local linear estimators in the class have the same pointwise asymptotic properties. We derive the multiplicative bias correction of the local linear estimator. In addition we propose a new bias correction technique based on bootstrap estimation of additive bias. This latter method has excellent theoretical properties. Based on an extensive simulation study where we compare the performance of competing estimators, we also recommend the use of the additive bias correction in applied work.
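
For grouped occurrence/exposure data, the weighted least-squares flavour of a local linear hazard estimator can be sketched as follows: at each target point, fit a line to the raw hazards occ/exposure with kernel-times-exposure weights and read off the intercept. The Gaussian kernel, the exposure weighting, and the grouped-data setting are illustrative assumptions; the class of estimators in the paper is defined more generally.

```python
import numpy as np

def local_linear_hazard(t_grid, ages, occ, expo, b):
    """Local linear hazard estimate from grouped occurrence/exposure data.
    At each target point t, fit a weighted least-squares line to the raw
    hazards occ/expo with weights expo * K_b(age - t) and return the
    intercept; the local slope term supplies the boundary correction."""
    raw = occ / expo
    out = np.empty(len(t_grid))
    for k, t in enumerate(t_grid):
        u = (ages - t) / b
        w = expo * np.exp(-0.5 * u ** 2)          # exposure-weighted Gaussian kernel
        X = np.column_stack([np.ones_like(ages), ages - t])
        # weighted normal equations (X'WX) beta = X'W raw
        beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * raw))
        out[k] = beta[0]                          # intercept = hazard estimate at t
    return out
```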

98 citations


Cited by
Journal ArticleDOI
TL;DR: A review of P. Billingsley's classic monograph Convergence of Probability Measures (Wiley, 1968), the standard reference on weak convergence of probability measures.
Abstract: Convergence of Probability Measures. By P. Billingsley. Chichester, Sussex, Wiley, 1968. xii, 253 p. 9 1/4 in. 117s.

5,689 citations

Journal ArticleDOI
TL;DR: In this article, a rigorous distribution theory for kernel-based matching is presented, and the method of matching is extended to more general conditions than the ones assumed in the statistical literature on the topic.
Abstract: This paper develops the method of matching as an econometric evaluation estimator. A rigorous distribution theory for kernel-based matching is presented. The method of matching is extended to more general conditions than the ones assumed in the statistical literature on the topic. We focus on the method of propensity score matching and show that it is not necessarily better, in the sense of reducing the variance of the resulting estimator, to use the propensity score method even if the propensity score is known. We extend the statistical literature on the propensity score by considering the case when it is estimated both parametrically and nonparametrically. We examine the benefits of separability and exclusion restrictions in improving the efficiency of the estimator. Our methods also apply to the econometric selection bias estimator. Matching is a widely-used method of evaluation. It is based on the intuitively attractive idea of contrasting the outcomes of programme participants (denoted Y_1) with the outcomes of "comparable" nonparticipants (denoted Y_0). Differences in the outcomes between the two groups are attributed to the programme. Let I_0 and I_1 denote the sets of indices for nonparticipants and participants, respectively. The following framework describes conventional matching methods as well as the smoothed versions of these methods analysed in this paper. To estimate a treatment effect for each treated person i ∈ I_1, outcome Y_{1i} is compared to an average of the outcomes Y_{0j} for matched persons j ∈ I_0 in the untreated sample. Matches are constructed on the basis of observed characteristics X ∈ R^d. Typically, when the observed characteristics of an untreated person are closer to those of the treated person i ∈ I_1, using a specific distance measure, the untreated person gets a higher weight in constructing the match. The estimated gain for each person i in the treated sample is…
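
The kernel-matching construction described above admits a compact sketch: for each treated unit, form a counterfactual as a kernel-weighted average of control outcomes, with weights shrinking in the distance between estimated propensity scores, and average the resulting differences. The Gaussian kernel, the fixed bandwidth h, and matching on a pre-estimated propensity score are assumptions of this sketch, not the paper's prescription.

```python
import numpy as np

def kernel_matching_att(y_treat, p_treat, y_ctrl, p_ctrl, h):
    """Kernel-matching estimate of the average treatment effect on the
    treated: each treated unit's counterfactual is a kernel-weighted
    average of control outcomes, weighted by the distance between the
    (estimated) propensity scores."""
    effects = []
    for yi, pi in zip(y_treat, p_treat):
        w = np.exp(-0.5 * ((p_ctrl - pi) / h) ** 2)   # Gaussian kernel weights
        counterfactual = np.sum(w * y_ctrl) / np.sum(w)
        effects.append(yi - counterfactual)
    return float(np.mean(effects))
```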

3,861 citations

Book ChapterDOI
01 Jan 2011
TL;DR: Weak convergence methods in metric spaces are studied in this book, with applications sufficient to show their power and utility; the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4, which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable." Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness. After mentioning that probabilities on the rationals are tight, the author says it is an…

3,554 citations

Book
16 Oct 2005
TL;DR: This book offers the most comprehensive treatment of the theoretical concepts and modelling techniques of quantitative risk management, describing the latest advances in the field, including market, credit and operational risk modelling.
Abstract: This book provides the most comprehensive treatment of the theoretical concepts and modelling techniques of quantitative risk management. Whether you are a financial risk analyst, actuary, regulator or student of quantitative finance, Quantitative Risk Management gives you the practical tools you need to solve real-world problems. Describing the latest advances in the field, Quantitative Risk Management covers the methods for market, credit and operational risk modelling. It places standard industry approaches on a more formal footing and explores key concepts such as loss distributions, risk measures and risk aggregation and allocation principles. The book's methodology draws on diverse quantitative disciplines, from mathematical finance and statistics to econometrics and actuarial mathematics. A primary theme throughout is the need to satisfactorily address extreme outcomes and the dependence of key risk drivers. Proven in the classroom, the book also covers advanced topics like credit derivatives. It is fully revised and expanded to reflect developments in the field since the financial crisis. It features shorter chapters to facilitate teaching and learning, provides enhanced coverage of Solvency II and insurance risk management, offers extended treatment of credit risk, including counterparty credit risk and CDO pricing, and includes a new chapter on market risk and new material on risk measures and risk aggregation.

2,580 citations