
Showing papers in "Scandinavian Actuarial Journal in 1981"


Journal ArticleDOI
TL;DR: In this paper, the authors consider the model introduced in Gerber (1974), in which the aggregate claims follow a compound Poisson process with Poisson parameter λ.
Abstract: 1. Introduction and Summary We shall consider the model that was introduced in Gerber (1974). Let {St } denote the compound Poisson process of the aggregate claims (given by the Poisson parameter λ...

82 citations
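
For orientation, since the abstract above is truncated, the aggregate claims process it refers to is the standard compound Poisson process of risk theory; a minimal statement (notation assumed, not taken from the paper) is

\[
S_t = \sum_{i=1}^{N_t} X_i, \qquad N_t \sim \mathrm{Poisson}(\lambda t),
\]

where the claim amounts \(X_1, X_2, \dots\) are i.i.d. and independent of the claim-number process \(\{N_t\}\), so that \(E[S_t] = \lambda t\,E[X_1]\) and \(\operatorname{Var}[S_t] = \lambda t\,E[X_1^2]\).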


Journal ArticleDOI
Bjørn Sundt
TL;DR: In this paper, it is shown that when the credibility estimators are updated, their estimation errors are updated as a by-product; the models treated include cases with an unknown underlying random parameter that develops over time.
Abstract: The paper treats models in which credibility estimators may be updated recursively as time passes. It is shown that when updating the credibility estimators, we get their estimation errors updated as a by-product. The models treated include cases with an unknown underlying random parameter that develops over time.

48 citations
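
The recursive structure described above is closely related to Kalman-filter updating of a linear Bayes estimator. The sketch below is a generic illustration of that idea under a simple random-walk model for the underlying parameter; it is not Sundt's actual recursion, and the model, function names and parameters are all assumptions made for the example.

import numpy as np

# Generic recursive linear Bayes (Kalman-type) update: the unknown risk
# parameter theta_t follows a random walk, observations are X_t = theta_t + noise.
# Updating the estimator also updates its estimation error (the posterior variance).

def recursive_credibility(x, m0, v0, obs_var, walk_var):
    """Return the sequence of credibility estimates and their estimation errors."""
    m, v = m0, v0                     # prior mean and prior variance of theta_0
    estimates, errors = [], []
    for xt in x:
        v = v + walk_var              # parameter drifts: prediction variance
        z = v / (v + obs_var)         # credibility weight for the new observation
        m = z * xt + (1 - z) * m      # updated credibility estimator
        v = (1 - z) * v               # updated estimation error, obtained as a by-product
        estimates.append(m)
        errors.append(v)
    return np.array(estimates), np.array(errors)

# Example with simulated claims data
rng = np.random.default_rng(0)
theta = np.cumsum(rng.normal(0, 0.1, 20)) + 5.0
claims = theta + rng.normal(0, 1.0, 20)
est, err = recursive_credibility(claims, m0=5.0, v0=1.0, obs_var=1.0, walk_var=0.01)
print(est[-1], err[-1])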


Journal ArticleDOI
TL;DR: In this paper, two characterizations of mixtures of Esscher principles, in terms of additivity combined with other conditions, are given.
Abstract: Two characterizations of mixtures of Esscher principles in terms of additivity combined with other conditions are given.

42 citations
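
As background for the characterization, the Esscher premium principle with parameter h assigns to a risk X the premium below; a mixture of Esscher principles combines such premiums over different values of h (the exact form of the mixture is as in the paper and is not restated here). This is the standard textbook definition, given only for orientation:

\[
H_h(X) \;=\; \frac{E\!\left[X e^{hX}\right]}{E\!\left[e^{hX}\right]}, \qquad h \ge 0,
\]

which reduces to the net premium \(E[X]\) for \(h = 0\) and is additive for independent risks, since the Esscher transform of a sum of independent risks factorizes.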


Journal ArticleDOI
TL;DR: In this article, a natural method is developed for extending the smoothing to the extremities of the data as a single overall matrix-vector operation with a well-defined structure, rather than as something extra grafted on at the ends.
Abstract: The use of a symmetrical moving weighted average of 2m + 1 terms to smooth equally spaced observations of a function of one variable does not yield smoothed values of the first m and the last m observations, unless additional data beyond the range of the original observations are available. By means of analogies to the Whittaker smoothing process and some related mathematical concepts, a natural method is developed for extending the smoothing to the extremities of the data as a single overall matrix-vector operation having a well defined structure, rather than as something extra grafted on at the ends. The matrix approach is shown to be equivalent to an extrapolation algorithm.

21 citations
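
To make the end-value problem concrete, the sketch below applies a symmetric moving weighted average of 2m + 1 terms to equally spaced observations; only the interior n − 2m points receive smoothed values, which is exactly the gap the paper's matrix method fills. The weights used are the 5-point quadratic least-squares (Savitzky–Golay) coefficients, chosen only for illustration and not taken from the paper.

import numpy as np

def smooth_interior(y, w):
    """Symmetric moving weighted average of 2m+1 terms; returns smoothed values
    only for the interior points, leaving the first and last m observations
    unsmoothed (the problem the paper addresses)."""
    w = np.asarray(w, dtype=float)
    m = (len(w) - 1) // 2
    assert len(w) == 2 * m + 1 and np.allclose(w, w[::-1]), "weights must be symmetric"
    w = w / w.sum()
    return np.convolve(y, w, mode="valid")    # length n - 2m

# Illustrative 5-term (m = 2) symmetric weights
y = np.sin(np.linspace(0, 3, 25)) + 0.05 * np.random.default_rng(1).normal(size=25)
w = [-3, 12, 17, 12, -3]                      # 5-point quadratic least-squares weights
smoothed = smooth_interior(y, w)
print(len(y), len(smoothed))                  # 25 -> 21: the first/last 2 values are missing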


Journal ArticleDOI
TL;DR: In this article, the authors obtain the rate of strong convergence of the estimation error T(fₙ) − T(f) under mild assumptions on f, and establish asymptotic normality with associated Berry–Esseen rates.
Abstract: For specified functions φ and ψ and unknown distribution function F with density f, the efficacy-related parameter T(f) = ∫ φ(x)ψ(F(x))f²(x)dx may be estimated by the sample analogue estimator T(fₙ) based on an empirical density estimator fₙ. For {Xᵢ} i.i.d. F and fₙ of the form fₙ(x) = n^(−1)…, we approximate the estimation error T(fₙ) − T(f) by the Gateaux derivative of the functional T(·) at the “point” f with increment fₙ − f. In conjunction with stochastic properties of the L2-norm ‖fₙ − f‖, this approach leads to characterizations of the stochastic behavior of T(fₙ) − T(f). In particular, under mild assumptions on f, we obtain the rate of strong convergence T(fₙ) − T(f) = O(n^(−1/2)(log n)^(1/2)) a.s., which significantly improves previous results in the literature. Also, we establish asymptotic normality with associated Berry–Esseen rates.

16 citations
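
A numerical illustration of the sample-analogue idea: estimate f by a kernel density estimate fₙ, F by the empirical distribution function, and evaluate T(fₙ) = ∫ φ(x)ψ(Fₙ(x))fₙ²(x)dx on a grid. The Gaussian kernel, bandwidth and the choices of φ and ψ are assumptions made for the example only; the paper's exact form of fₙ is not reproduced here.

import numpy as np

def T_hat(x, phi, psi, bandwidth, grid_size=2000):
    """Plug-in estimate of T(f) = ∫ phi(t) psi(F(t)) f(t)^2 dt using a Gaussian
    kernel density estimate f_n and the empirical cdf F_n (illustrative only)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = np.linspace(x.min() - 3 * bandwidth, x.max() + 3 * bandwidth, grid_size)
    # Gaussian kernel density estimate evaluated on the grid
    u = (t[:, None] - x[None, :]) / bandwidth
    f_n = np.exp(-0.5 * u**2).sum(axis=1) / (n * bandwidth * np.sqrt(2 * np.pi))
    # empirical distribution function on the grid
    F_n = np.searchsorted(np.sort(x), t, side="right") / n
    integrand = phi(t) * psi(F_n) * f_n**2
    return np.trapz(integrand, t)

# Example with phi ≡ 1, psi ≡ 1: T(f) = ∫ f² = 1/(2√π) ≈ 0.282 for the standard normal
rng = np.random.default_rng(2)
sample = rng.normal(size=5000)
print(T_hat(sample, phi=lambda t: np.ones_like(t), psi=lambda p: np.ones_like(p),
            bandwidth=0.3))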


Journal ArticleDOI
TL;DR: In this article, a recurrence relation for the moments of order statistics from a two-parameter Weibull distribution is established, which enables us to obtain all the single moments of the order statistics for which exact and explicit expressions are given by Lieblein (1955).
Abstract: Summary We establish a recurrence relation for the moments of order statistics from a two-parameter Weibull distribution. This enables us to obtain all the single moments of order statistics for which exact and explicit expressions are given by Lieblein (1955).

11 citations
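
For comparison with the recurrence, single moments of order statistics can also be obtained by direct numerical integration of the standard order-statistic density; the sketch below does this for a two-parameter Weibull with given shape and scale. It is an illustration of the quantity being computed, not the paper's recurrence relation.

import math
from scipy import integrate

def weibull_os_moment(k, r, n, shape, scale=1.0):
    """E[X_{r:n}^k] for the r-th order statistic of a sample of size n from a
    Weibull(shape, scale) distribution, by numerical integration of
    n * C(n-1, r-1) * x^k * F(x)^(r-1) * (1 - F(x))^(n-r) * f(x)."""
    binom = math.comb(n - 1, r - 1)
    def integrand(x):
        z = (x / scale) ** shape
        F = 1.0 - math.exp(-z)
        f = (shape / scale) * (x / scale) ** (shape - 1) * math.exp(-z)
        return n * binom * x**k * F ** (r - 1) * (1.0 - F) ** (n - r) * f
    value, _ = integrate.quad(integrand, 0.0, math.inf)
    return value

# Check: the mean of the smallest of n = 5 standard exponential (shape = 1) variables is 1/5
print(weibull_os_moment(k=1, r=1, n=5, shape=1.0))   # ≈ 0.2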


Journal ArticleDOI
TL;DR: In this article, moments are calculated for the random variable Sₙ, the accumulated amount at the end of n periods of payments of 1 per period made in advance, given that the rate of interest for each period varies according to a second-order autoregressive process.
Abstract: 1. Introduction 1.1. Moments for the random variable Sₙ, the accumulated amount at the end of n periods of payments of 1 per period made in advance, given that the rate of interest for each period varies according to a second-order autoregressive process, are calculated. These results are compared with those obtained by Wilkie (1978) using simulation.

10 citations
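
A Monte Carlo counterpart to the paper's analytical moments (and to Wilkie's (1978) simulations): accumulate payments of 1 made in advance for n periods while the periodic rate of interest follows a second-order autoregressive process. The AR(2) parameterisation and numerical values below are assumptions chosen only to make the sketch run.

import numpy as np

def simulate_S_n(n, n_sims, mu=0.05, a1=0.6, a2=-0.2, sigma=0.01, seed=0):
    """Simulate S_n, the accumulated amount at the end of n periods of payments
    of 1 per period made in advance, when the rate of interest i_t follows the
    (illustrative) AR(2) model  i_t - mu = a1 (i_{t-1} - mu) + a2 (i_{t-2} - mu) + e_t."""
    rng = np.random.default_rng(seed)
    S = np.zeros(n_sims)
    i_prev1 = np.full(n_sims, mu)          # i_{t-1}, started at the mean rate
    i_prev2 = np.full(n_sims, mu)          # i_{t-2}
    for _ in range(n):
        e = rng.normal(0.0, sigma, n_sims)
        i_t = mu + a1 * (i_prev1 - mu) + a2 * (i_prev2 - mu) + e
        S = (S + 1.0) * (1.0 + i_t)        # payment of 1 in advance, then one period of interest
        i_prev2, i_prev1 = i_prev1, i_t
    return S

S = simulate_S_n(n=20, n_sims=100_000)
print(S.mean(), S.var())                   # sample moments of S_n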


Journal ArticleDOI
TL;DR: In this article, an identity involving the product moments of order statistics from normal and generalized truncated normal distributions is obtained and applied to derive an expression for the variance of the standardized and studentized selection differentials.
Abstract: An identity involving the product moments of order statistics from normal and generalized truncated normal distributions is obtained. It is applied to obtain an expression for the variance of the standardized and studentized selection differentials. The mean and variance of the standardized selection differential from the standard normal distribution are also tabulated for n ⩽ 50.

8 citations
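
The quantities tabulated in the paper can be checked by simulation: for a standard normal population (µ = 0, σ = 1), the standardized selection differential based on the top k of n observations is simply the average of the k largest order statistics. The sketch below estimates its mean and variance by Monte Carlo; it illustrates the definition only, not the paper's identity.

import numpy as np

def selection_differential_moments(n, k, n_sims=200_000, seed=3):
    """Monte Carlo mean and variance of the standardized selection differential
    D = (1/k) * sum of the k largest of n i.i.d. standard normal observations."""
    rng = np.random.default_rng(seed)
    x = rng.normal(size=(n_sims, n))
    top_k = np.sort(x, axis=1)[:, -k:]          # the k largest order statistics per sample
    d = top_k.mean(axis=1)                      # selection differential (already standardized)
    return d.mean(), d.var()

print(selection_differential_moments(n=10, k=3))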


Journal ArticleDOI
TL;DR: In this paper, a semi-Markovian generalization of a well-known game of economic survival is studied, in which a firm has an initial capital x, the profits resulting from successive time intervals (n, n + 1) are random variables, and at each instant n the firm may pay dividends to the shareholders, its only goal being to maximize the expected discounted value of all dividends paid before ruin.
Abstract: This paper is devoted to a semi-Markovian generalization of a well-known game of economic survival: a firm has an initial capital x, and the profits resulting from the successive time intervals (n, n + 1) are random variables; at each instant n the firm may pay dividends to the shareholders, its only goal being to maximize the expected discounted value of all dividends paid before ruin. The existence of an optimal stationary strategy is proved; the only “impatient” optimal stationary strategy is a “band-strategy”. In the last part of the paper we construct an algorithm producing the optimal band-strategy after a finite number of iterations, and we show how to calculate the ruin probabilities associated with this strategy.

7 citations
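
The flavour of the optimisation can be conveyed by value iteration on a crude discrete version of the game: integer capital, a two-point profit distribution, a dividend decision each period, and ruin when capital becomes negative. This toy model and all its parameters are assumptions made for the sketch; the paper's semi-Markovian setting, band strategies and finite algorithm are not reproduced here.

import numpy as np

# Toy de Finetti-type dividend problem solved by value iteration.
# State: integer capital 0..B (capital above B is truncated for the illustration).
# Each period: pay a dividend d (0 <= d <= x), then earn +1 with prob p or -1 with prob 1-p.
# Ruin (absorbing, value 0) occurs if capital becomes negative.

B, p, beta = 30, 0.6, 0.95
V = np.zeros(B + 1)
policy = np.zeros(B + 1, dtype=int)

for _ in range(2000):                          # value iteration until (approximate) convergence
    V_new = np.empty_like(V)
    for x in range(B + 1):
        best = -np.inf
        for d in range(x + 1):
            y = x - d                          # capital remaining after the dividend
            cont = p * V[min(y + 1, B)]        # profit +1 (capital truncated at B)
            if y - 1 >= 0:                     # profit -1 contributes only if it does not cause ruin
                cont += (1 - p) * V[y - 1]
            val = d + beta * cont
            if val > best:
                best, policy[x] = val, d
        V_new[x] = best
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new

print(policy)   # the (approximately) optimal dividend as a function of current capital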


Journal ArticleDOI
TL;DR: In this paper, the optimal strata boundaries for proportional stratified samples, optimal group limits for grouped samples, and optimal spacing for censored samples of selected order statistics are related under certain conditions.
Abstract: It is shown that under certain conditions the optimal strata boundaries for proportional stratified samples, optimal group limits for grouped samples and optimal spacing for censored samples of selected order statistics are related, so that theoretically the optimal dividing points for one type of sample can be obtained from those for any other type. Two related problems, on optimal condensation of observations into groups and on three-group simple regression, are also considered.

7 citations
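
One concrete instance of "optimal group limits" is the set of boundaries that minimizes within-group variance for a given distribution, which can be found by a Lloyd-type iteration: place each boundary at the midpoint of the two neighbouring group means, then recompute the group means from the boundaries, and repeat. The sketch below does this for the standard normal; it is an illustrative construction under that interpretation, not the paper's equivalence argument.

import numpy as np
from scipy import stats

def optimal_group_limits(n_groups, iters=200):
    """Lloyd-type iteration for group limits minimizing within-group variance
    of a standard normal variable (illustrative)."""
    # start from equal-probability boundaries
    b = stats.norm.ppf(np.linspace(0, 1, n_groups + 1)[1:-1])
    for _ in range(iters):
        edges = np.concatenate(([-np.inf], b, [np.inf]))
        lo, hi = edges[:-1], edges[1:]
        # conditional means: E[X | a < X <= c] = (phi(a) - phi(c)) / (Phi(c) - Phi(a))
        means = (stats.norm.pdf(lo) - stats.norm.pdf(hi)) / (stats.norm.cdf(hi) - stats.norm.cdf(lo))
        b = 0.5 * (means[:-1] + means[1:])      # new boundaries: midpoints of adjacent means
    return b

print(optimal_group_limits(4))   # roughly (-0.98, 0.0, 0.98) for 4 groups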


Journal ArticleDOI
TL;DR: In this article, a large sample estimation procedure is used which employs the asymptotically best linear unbiased estimators based on sample quantiles for estimating the mean, standard deviation, and quantiles of the logistic distribution.
Abstract: This paper considers the problems of estimating the mean, standard deviation, and quantiles for the logistic distribution. A large sample estimation procedure is utilized which employs the asymptotically best linear unbiased estimators based on sample quantiles. An approximate procedure for optimal spacing selection for the sample quantiles is presented and its properties are discussed.
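
A simple quantile-based estimator of the same flavour (though not the paper's asymptotically best linear unbiased estimator): since the logistic quantile function is Q(p) = µ + s·ln(p/(1 − p)), any pair of symmetric sample quantiles yields estimates of the mean µ and scale s, and hence of the standard deviation sπ/√3. The specific quantile level below is an arbitrary illustrative spacing.

import numpy as np

def logistic_quantile_estimates(x, p=0.8):
    """Estimate the logistic location mu and scale s from the sample p- and
    (1-p)-quantiles, using Q(p) = mu + s * log(p / (1 - p)) (illustrative, not ABLUE)."""
    q_hi, q_lo = np.quantile(x, [p, 1 - p])
    mu_hat = 0.5 * (q_hi + q_lo)                       # by symmetry of the logistic
    s_hat = (q_hi - q_lo) / (2 * np.log(p / (1 - p)))  # solve the two quantile equations
    sd_hat = s_hat * np.pi / np.sqrt(3)                # logistic standard deviation
    return mu_hat, s_hat, sd_hat

rng = np.random.default_rng(4)
sample = rng.logistic(loc=2.0, scale=1.5, size=10_000)
print(logistic_quantile_estimates(sample))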

Journal ArticleDOI
TL;DR: In this paper, the Pareto distribution is shown to be a generalized Γ-convolution, based on an old stability property of the Stieltjes cone due to Reuter and Ito.
Abstract: In 1977 Thorin introduced the class of generalized Γ-convolutions and proved that the Pareto distribution belongs to this class. In subsequent papers Thorin, Bondesson and others have shown that several important distributions are generalized Γ-convolutions. The purpose of the present paper is to provide a very short proof of Thorin's result that the Pareto distribution is a generalized Γ-convolution. The proof is based on an old stability property of the Stieltjes cone due to Reuter (1956) and Ito (1974).

Journal ArticleDOI
TL;DR: In this article, the authors examined the various statements made by Kimball and Chiang based on the proportionality assumption and provided mathematical proofs under more general assumptions, and analyzed the relation between the variations in the crude probabilities and the variations of the intensity functions.
Abstract: Chiang (1961, 1968) was the first to give a systematic account on the probability of death from specific causes in the presence of competing risks under what is called “proportionality assumption” i.e. µi (t)|µ(t) is independent of t for each i. He studied the relationship between the partial crude and the corresponding crude probabilities and developed their estimates. The theory of competing risks has also been applied to the statistical studies of life testing and medical follow-up. The proportionality assumption was later attacked by Kimball (1969) and defended by Chiang (1970). This paper examines the various statements made by Kimball and Chiang based on the proportionality assumption. These statements are modfied and mathematical proofs under more general assumptions are supplied. Also the relation between the variations in the crude probabilities and the variations in the intensity functions is analyzed.
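
A one-line consequence of the proportionality assumption that underlies the Kimball–Chiang discussion, stated here only for orientation (the paper's more general results are not reproduced): if µᵢ(t)/µ(t) = cᵢ does not depend on t, then each crude probability of death is the same fixed fraction of the overall probability of death over any interval,

\[
Q_i \;=\; \int_0^{t} \exp\!\Big(-\!\int_0^{u}\mu(v)\,dv\Big)\,\mu_i(u)\,du
      \;=\; c_i \int_0^{t} \exp\!\Big(-\!\int_0^{u}\mu(v)\,dv\Big)\,\mu(u)\,du
      \;=\; c_i\, q,
\]

where \(q = 1 - \exp\!\big(-\int_0^{t}\mu(v)\,dv\big)\) is the overall probability of death in \((0, t]\).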

Journal ArticleDOI
TL;DR: The expected value part is the Bayes rule for the mean relative to squared error loss, and, as noted in this paper, its linearization does not present any problems in terms of estimating the so-called structural parameters.
Abstract: Introduction and summary Much of the recent literature in credibility theory has concentrated on the expected value part of Bühlmann's (1970) decomposition of the credibility premium. The expected value part is the Bayes rule for the mean relative to squared error loss, and its linearization does not present any problems in terms of estimating the so-called structural parameters.
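
For context, the linearized "expected value part" referred to above is the familiar Bühlmann credibility formula: the linear Bayes estimator of the risk's mean, given n observations with sample mean X̄, is a weighted average of X̄ and the overall mean m, with a weight determined by the structural parameters (the within-risk variance v and the between-risk variance a). Stated here only as standard background:

\[
\hat\mu \;=\; z\,\bar X + (1 - z)\,m, \qquad
z \;=\; \frac{n}{\,n + v/a\,}, \qquad
v = E\big[\sigma^2(\theta)\big], \quad a = \operatorname{Var}\big[\mu(\theta)\big],
\]

so that computing the credibility premium in practice reduces to estimating the structural parameters m, v and a.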

Journal ArticleDOI
TL;DR: In this paper, the weak convergence of stochastic processes whose law of motion changes is considered, and it is shown that if the input process is a compound Poisson process that converges weakly to Brownian motion with drift, then the alternating process converges to a diffusion process with similar law.
Abstract: The weak convergence of stochastic processes whose law of motion changes is considered. The law of motion consists of two parts: the first depends on the process's present value and the other is the input process. It is shown that if the input process is a compound Poisson process that converges weakly to Brownian motion with drift, then the alternating process converges to a diffusion process with a similar law of motion.

Journal ArticleDOI
TL;DR: In this paper, an alternative approach to the variance principle of premium determination is explored, which rationalises the principle in terms of an economic theory and formalises the notion that loadings in addition to the "fair" premium are related to competition and expenses.
Abstract: An alternative approach to the variance principle of premium determination is explored. The approach rationalises the principle in terms of an economic theory and formalises the notion that loadings in addition to the ‘fair’ premium are related to competition and expenses.
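
For reference, the variance principle being rationalised assigns to a risk X the premium equal to its expected value plus a loading proportional to its variance:

\[
P(X) \;=\; E[X] + \alpha \operatorname{Var}[X], \qquad \alpha > 0,
\]

where \(E[X]\) is the 'fair' premium and the loading \(\alpha \operatorname{Var}[X]\) is the part the paper relates to competition and expenses.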

Journal ArticleDOI
TL;DR: In this article, the authors analyze the stop-loss premium Π(c, t) for the standard risk model in which the number of claims in (0, t] is Poisson with intensity 1 and the claim-size distribution F is on (−∞, +∞) with mean µ > 0.
Abstract: This paper analyses the stop-loss premium Π(c, t) for the standard risk model in which the number of claims in (0, t] is Poisson with intensity 1 and the claim-size distribution F is on (−∞, +∞) with mean µ > 0. It is shown that, under typical conditions on F, Π(c, t) = µt − c + R(c, t), where R(c, t) tends to zero exponentially fast as t tends to infinity. The precise behaviour of R(c, t) is established.
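
The stop-loss premium in question is Π(c, t) = E[(S_t − c)⁺] for the aggregate claims S_t of the compound Poisson model. A Monte Carlo sketch of this quantity (with an exponential claim-size distribution chosen only for the example) makes the statement Π(c, t) ≈ µt − c for large t easy to observe numerically.

import numpy as np

def stop_loss_premium(c, t, claim_mean=1.0, n_sims=50_000, seed=5):
    """Monte Carlo estimate of Pi(c, t) = E[(S_t - c)+] for a compound Poisson
    process with intensity 1 and (illustrative) exponential claim sizes."""
    rng = np.random.default_rng(seed)
    n_claims = rng.poisson(t, size=n_sims)                 # Poisson number of claims in (0, t]
    total = np.array([rng.exponential(claim_mean, k).sum() for k in n_claims])
    return np.maximum(total - c, 0.0).mean()

for t in (5, 20, 80):
    c = 10.0
    print(t, stop_loss_premium(c, t), 1.0 * t - c)         # Pi(c, t) vs. mu*t - c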

Journal ArticleDOI
TL;DR: Some principles are presented for calculating premiums for waiver of recurrent single premiums (“natural” premiums) according to different technical systems for handling the disability risk, with reference primarily to the paper by Hoem, Riis & Sand (1971).
Abstract: When group life insurance with recurrent single premiums was introduced 30 years ago in Sweden, waiver of premium was not included. Later, it has been included against an additional premium determined as a simple proportional loading. In this note some principles will be presented for calculating premiums for waiver of recurrent single premiums (“natural” premiums) according to different technical systems for handling the disability risk. Reference is primarily made to the paper by Hoem, Riis & Sand (1971). In order to avoid notational difficulties we assume here that the deferment or qualifying period is zero and that the insurance only includes complete disablement (degree of disability equal to 1).