
Showing papers in "Scandinavian Actuarial Journal in 1956"


Journal ArticleDOI
TL;DR: In this paper, the authors study the efficiency of various methods of estimating the force of mortality and show that, at least for large samples, the most efficient is the maximum likelihood estimate, against which the remaining methods are compared.
Abstract: 2.1. Limitations of the parametric methods. In the previous sections we have studied the efficiency of various methods of estimating the force of mortality. The most efficient of these is, at least for large samples, the one given by the maximum likelihood method, and the rest of them have to be compared to this best estimate. However in this discussion the notion of efficiency is based on the assumption that the mortality intensity can be expressed by Makeham’s formula. How realistic is this in practice?
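For reference, Makeham's formula, on which the efficiency comparison rests, expresses the force of mortality at age x as a constant term plus a Gompertz term; the notation below is the conventional one, not taken from the paper.

```latex
\mu_x = A + B c^{x}, \qquad B > 0,\; c > 1.
```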

524 citations


Journal ArticleDOI
TL;DR: In this article, a sequence of independent random variables (r.v.) with the same distribution function (d.f.) is considered, and the accuracy with which the d.f. of the normalized sum is approximated by the normal d.f. Φ(x) is studied.
Abstract: Consider a sequence of independent random variables (r.v.) X1, X2, …, Xn, …, with the same distribution function (d.f.) F(x). Let E(Xn) = 0 and let the second and third absolute moments of Xn be finite, E(ϕ(X)) denoting the mean value of the r.v. ϕ(X). Further, let the normalized sum (X1 + X2 + ⋯ + Xn)/(σ√n), with σ² = E(Xn²), have the d.f. Fn(x). It was proved by Berry [1] and the present author (Esseen [2], [4]) that Fn(x) converges uniformly to Φ(x) at the rate stated below, Φ(x) being the normal d.f.
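The Berry-Esseen bound referred to in the abstract has the following standard form (the numerical constant C is not specified in the abstract):

```latex
\sup_{x} \bigl| F_n(x) - \Phi(x) \bigr| \;\le\; C \, \frac{\beta_3}{\sigma^{3}\sqrt{n}},
\qquad \sigma^{2} = E\!\left(X_n^{2}\right), \quad \beta_3 = E\!\left(|X_n|^{3}\right).
```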

148 citations


Journal ArticleDOI
TL;DR: For all real values of α and λ satisfying α + λ > 0, α ≠ 0, α ≠ 1, λ > 0, an inequality involving the Gamma function is derived from Cramér's general inequality for the variance of regular unbiased estimators.
Abstract: 1. Summary. For all real values of α and λ satisfying α + λ > 0, α ≠ 0, α ≠ 1, λ > 0, a certain inequality involving the Gamma function holds. This follows from a general inequality for the variance of regular unbiased estimators given by Cramér [1].
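The "general inequality for the variance of regular unbiased estimators" credited to Cramér [1] is what is now usually called the Cramér-Rao inequality. In standard notation (an assumption, since the paper's own notation is not reproduced in the abstract), for a regular unbiased estimator T of g(θ) based on n independent observations from a density f(x; θ),

```latex
\operatorname{Var}(T) \;\ge\; \frac{\bigl[g'(\theta)\bigr]^{2}}
{\,n \, E\!\left[\left(\tfrac{\partial}{\partial\theta} \log f(X;\theta)\right)^{2}\right]} .
```

The Gamma-function inequality of the paper follows from applying this bound; the inequality itself is not quoted in the abstract.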

49 citations


Journal ArticleDOI
TL;DR: The need for a »smooth« test of goodness of fit has long been felt by statisticians; in response, J. Neyman (1937) put forward his tests, which have many of the required properties but have not come into general use.
Abstract: 1. The need for a »smooth« test of goodness of fit has long been felt by statisticians. In response to this, J. Neyman (1937) put forward his tests. As he showed, they have many of the properties required of smooth tests. In spite of this they have not come into general use.

32 citations


Journal ArticleDOI
J. R. Blum
TL;DR: In this paper, the normal distribution is characterized by a simple identity between characteristic functions, and two consequences of this characterization are obtained.
Abstract: 1. The normal distribution is shown to be characterized by a simple identity between characteristic functions. Two consequences of this characterization are obtained.

13 citations


Journal ArticleDOI
TL;DR: In this paper, the authors indicate how a well-known problem in the theory of sampling can be reformulated as a problem of linear programming.
Abstract: The purpose of this paper is to indicate how a well-known problem in the theory of sampling can be reformulated as a problem of linear programming. The problem is often given in the following form:
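The abstract breaks off before stating the problem, so the following is only a hypothetical illustration of the general idea: one classical sampling problem that can be written as a linear program is that of finding a sampling design, i.e. probabilities p(s) over the possible samples s, whose first-order inclusion probabilities match prescribed targets, optionally minimizing some linear cost. Whether this is the problem the paper has in mind is an assumption, and the numbers below are made up.

```python
# Hypothetical illustration: find a fixed-size sampling design p(s) whose
# inclusion probabilities match prescribed targets, posed as a linear program.
from itertools import combinations
import numpy as np
from scipy.optimize import linprog

N, n = 4, 2                                   # population size, sample size (made up)
pi = np.array([0.6, 0.5, 0.5, 0.4])           # target inclusion probabilities, sum = n
samples = list(combinations(range(N), n))     # all candidate samples of size n

# Equality constraints: total probability 1, and for each unit i
# the probabilities of the samples containing i sum to pi_i.
A_eq = np.zeros((N + 1, len(samples)))
A_eq[0, :] = 1.0
for j, s in enumerate(samples):
    for i in s:
        A_eq[1 + i, j] = 1.0
b_eq = np.concatenate(([1.0], pi))

# Any linear objective will do; here we simply minimize the mass on the first sample.
c = np.zeros(len(samples))
c[0] = 1.0

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0.0, 1.0)] * len(samples))
print(dict(zip(samples, np.round(res.x, 3))))
```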

12 citations


Journal ArticleDOI
TL;DR: In this paper, the authors define the independent parameter of a collective risk process, denoted by τ, with the origin at the point of departure of the process and on a scale independent of the number of expected changes of the random function.
Abstract: 1. For the definition of general processes with special regard to those concerned in Collective Risk Theory reference is made to Cramér (Collective Risk Theory, Skandia Jubilee Volume, Stockholm, 1955). Let the independent parameter of such a process be denoted by τ, with the origin at the point of departure of the process and on a scale independent of the number of expected changes of the random function. Denote by p(τ, n)dτ the asymptotic expression for the conditional probability of one change in the random function while the parameter passes from τ to τ + dτ, relative to the hypothesis that n changes have occurred while the parameter passes from 0 to τ. Assume further, unless the contrary is stated, that the probability of more than one change, while the parameter passes from τ to τ + dτ, is of smaller order than dτ.
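As a minimal illustration of the kind of process just described, the sketch below simulates change epochs under the simplifying assumption that the intensity depends only on the number of changes already made, p(τ, n) = a + bn (a pure birth process); the linear form and the parameter values are hypothetical, not taken from the paper.

```python
# Simulate the epochs of change of a counting process whose intensity depends
# on the number of changes made so far: p(tau, n) = a + b*n (hypothetical form).
import random

def simulate(horizon, a=1.0, b=0.2, seed=1):
    """Return the epochs tau_1 < tau_2 < ... of changes up to the given horizon."""
    rng = random.Random(seed)
    tau, n, epochs = 0.0, 0, []
    while True:
        rate = a + b * n                 # intensity given that n changes have occurred
        tau += rng.expovariate(rate)     # exponential waiting time to the next change
        if tau > horizon:
            return epochs
        epochs.append(tau)
        n += 1

print(simulate(horizon=5.0))
```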

8 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider testing the hypothesis that the probability of occurrence of an event E is constant from trial to trial, given that E is observed to occur a total of k times in n independent trials.
Abstract: The following situation is considered. A fixed number (= n) of independent trials T 1, T 2, …, T n is given, and in each of these an event E may or may not occur. It is further observed that the event E occurs a total of k times amongst the n trials T i (i = 1, …, n). It is then required to test the hypothesis H 0 that the probability of the occurrence of E is constant from trial to trial, i.e. H 0 is the hypothesis: p 1 = p 2 = ⋯ = p n = p, where p i (i = 1, …, n) represents the probability that E occurs on the ith trial.
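A purely illustrative sketch, not the test proposed in the paper: under H 0, conditional on the total k, every arrangement of the k occurrences among the n trials is equally likely, so the null distribution of any arrangement-sensitive statistic can be obtained by resampling arrangements. The trend statistic below (the sum of the trial indices at which E occurred) is a hypothetical choice.

```python
# Conditional (given k) resampling test of H0: p_1 = ... = p_n, using the sum of
# the trial indices at which E occurred as an illustrative trend-sensitive statistic.
import random

def conditional_trend_test(outcomes, n_sim=20000, seed=1):
    """outcomes: list of 0/1 indicators for the n trials. Returns a two-sided p-value."""
    rng = random.Random(seed)
    n, k = len(outcomes), sum(outcomes)
    observed = sum(i for i, e in enumerate(outcomes, start=1) if e)
    mean_null = k * (n + 1) / 2.0        # expected value of the statistic under H0
    count = 0
    for _ in range(n_sim):
        stat = sum(rng.sample(range(1, n + 1), k))   # a random placement of the k occurrences
        if abs(stat - mean_null) >= abs(observed - mean_null):
            count += 1
    return count / n_sim

print(conditional_trend_test([1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1, 1]))
```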

6 citations


Journal ArticleDOI
TL;DR: In this paper, a sample analog of the percentage point estimation procedure has been proposed for life testing situations, where the first time at which a specified number of additional items of a sample will have failed can be predicted from the values of the items which have already failed.
Abstract: Consider a set of observations which are obtained by truncating a sample of known size. The truncation procedure consists of deleting a known number of the largest sample values and a known number of the smallest sample values. One problem considered is the use of this data to estimate certain of the population percentage points for which the corresponding sample data was deleted. Another problem is to estimate the population mean and standard deviation. This paper presents solutions to these problems which are valid for a rather general class of continuous statistical populations. The results obtained should be applicable to most practical cases of a continuous type. A sample analog of the percentage point estimation procedure has interesting uses for life testing situations. Namely, the first time at which a specified number of additional items of a sample will have failed can be predicted from the values of the items which have already failed.
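The paper's own procedure is distribution-free and is not reproduced in the abstract. As a hedged point of comparison only, the sketch below fits an assumed normal model to a doubly Type II censored sample (the r smallest and s largest values deleted, counts known) by maximum likelihood, from which percentage points, the mean and the standard deviation follow; every modelling choice here is an assumption, not the paper's method.

```python
# Fit a normal model to a doubly Type II censored sample by maximum likelihood.
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def fit_censored_normal(retained, r, s):
    """retained: the middle order statistics; r, s: numbers deleted below and above."""
    x = np.sort(np.asarray(retained, dtype=float))

    def neg_log_lik(params):
        mu, log_sigma = params
        sigma = np.exp(log_sigma)
        z = (x - mu) / sigma
        ll = np.sum(norm.logpdf(z) - log_sigma)   # retained observations
        ll += r * norm.logcdf(z[0])               # r values known to lie below the smallest retained
        ll += s * norm.logsf(z[-1])               # s values known to lie above the largest retained
        return -ll

    start = np.array([x.mean(), np.log(x.std() + 1e-6)])
    res = minimize(neg_log_lik, start, method="Nelder-Mead")
    return res.x[0], np.exp(res.x[1])             # estimated mean and standard deviation

rng = np.random.default_rng(0)
full = np.sort(rng.normal(10.0, 2.0, size=40))    # simulated data, for illustration only
mu_hat, sigma_hat = fit_censored_normal(full[3:-5], r=3, s=5)
print(mu_hat, sigma_hat, norm.ppf(0.95, mu_hat, sigma_hat))   # e.g. the 95 per cent point
```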

6 citations


Journal ArticleDOI
TL;DR: Non-cancellable sickness and disability insurance, known in Sweden as long-term sickness insurance, has been carried on in Sweden since the beginning of the twentieth century; first by Eir since 1911, then by Valkyrian from 1912, and by Salus, a special company for physicians, since 1929.
Abstract: Non-cancellable sickness and disability insurance—in Sweden known as long-term sickness insurance—has been carried on in Sweden since the beginning of the twentieth century; first by Eir since 1911, then by Valkyrian from 1912, and by Salus, a special company for physicians, since 1929. An account of the technical methods employed by Eir in sickness insurance is given in a paper which was read before the Ninth International Congress of Actuaries in 1930. In several important respects a new epoch has been established as regards sickness insurance in Sweden. On 1 January 1955 compulsory sickness insurance was introduced, and thus an essential part of the demand for sickness insurance was covered. At the same time three of the life assurance companies, Thule, Svenska Liv, and Stadernas Liv, have also begun to carry on the type of sickness insurance which had previously been effected only by the three companies mentioned above, and whose activities are restricted to sickness insurance. The apprehensio...

5 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider the use of a transformation to bring the Poisson distribution into approximate normality, and then estimate the Poisson parameter by means of the maximum likelihood solution for the truncated or censored normal distribution.
Abstract: 1. Several authors have given methods for the estimation of the parameter of a truncated Poisson distribution. Tippett (1932) gave the maximum likelihood equation, which is difficult to solve, together with a nomogram to aid the solution. Approximate methods have been suggested by Moore (1952, 1954), Plackett (1953) and Rider (1953). These approximations are all based on the moments or the incomplete moments of the portion of the distribution remaining. In this paper we will consider the use of a transformation to bring the Poisson distribution into approximate normality. The Poisson parameter may then be estimated by means of the maximum likelihood solution for the truncated or censored normal distribution.
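For context, in the common special case of truncation at zero (only positive counts observed) the exact maximum likelihood equation referred to above reduces to λ/(1 − e^(−λ)) = x̄, where x̄ is the mean of the observed counts; today it is a one-line root-finding problem. This is a standard result assumed here for illustration, and it is an alternative to, not a reproduction of, the transformation approach the paper develops.

```python
# Solve the zero-truncated Poisson ML equation lambda / (1 - exp(-lambda)) = xbar.
import numpy as np
from scipy.optimize import brentq

def ml_zero_truncated_poisson(counts):
    xbar = np.mean(counts)                    # mean of the truncated sample; must exceed 1
    f = lambda lam: lam / (1.0 - np.exp(-lam)) - xbar
    return brentq(f, 1e-8, 10.0 * xbar)       # the root is the ML estimate of lambda

print(ml_zero_truncated_poisson([1, 1, 2, 1, 3, 2, 1, 4, 2, 1]))
```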

Journal ArticleDOI
TL;DR: In this article, the author gave tables of functions of the normal frequency function, its integral and its first derivative, with the aid of which it is practicable to solve the maximum likelihood equations for coarsely grouped normal observations.
Abstract: 1. Introduction (a) Maximum Likelihood.—In a previous paper (THIS JOURNAL, vol. XXXII, 1949, pp. 135–159) the author gave tables of certain functions of ϕ(x), Φ(x) and ϕ′(x), where ϕ(x) denotes the normal law of distribution, Φ(x) its integral and ϕ′(x) its first derivative. With the aid of these tables it is practicable to solve the maximum likelihood equations for coarsely grouped normal observations. The procedure was illustrated by examples.
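For orientation (the tabulated functions themselves are not reproduced in the abstract), the maximum likelihood equations for coarsely grouped normal observations derive from the grouped likelihood below, written in standard notation that is assumed here rather than taken from the paper; n_j observations fall in the group (a_j, b_j].

```latex
L(\mu,\sigma) \;=\; \prod_{j}\left[\Phi\!\left(\frac{b_j-\mu}{\sigma}\right)
- \Phi\!\left(\frac{a_j-\mu}{\sigma}\right)\right]^{\,n_j}.
```

Setting the derivatives of log L with respect to μ and σ to zero yields equations involving ϕ, Φ and ϕ′ evaluated at the group boundaries, which is where tables of such functions become useful.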

Journal ArticleDOI
TL;DR: Extending an earlier result that the binomial distribution is valid under rather general conditions for a large number of statistically independent lives with possibly unequal mortality probabilities, this note treats situations where the lives are not necessarily independent; an analysis indicates that these situations include most actuarial applications involving a large number of lives.
Abstract: In a previous paper (see reference [1]), the binomial distribution was shown to be valid under rather general conditions for the case of a large number of statistically independent lives with possibly unequal mortality probabilities. This note extends these results to some situations where the lives are not necessarily statistically independent. An analysis is presented which indicates that these situations include most actuarial applications involving a large number of lives.

Journal ArticleDOI
TL;DR: In view of the current practice by many institutions of not crediting any interest accruing during a given period until the end of that period, it is proper to consider discrete interest functions as step functions.
Abstract: In view of the current practice by many institutions of not crediting any interest accruing during a given period until the end of that period, it is proper to consider discrete interest functions as step functions.
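A minimal sketch of the step-function view described above: if interest is credited only at the end of each period, the accumulated value of 1 jumps at the period ends and is constant in between. The rate and time used below are arbitrary.

```python
# Accumulated value of 1 when interest is credited only at the end of each period.
import math

def accumulated_value(t, i):
    """Value at time t (in periods) of 1 invested at rate i per period."""
    return (1.0 + i) ** math.floor(t)         # a step function, flat within each period

print(accumulated_value(2.75, 0.04))          # equals the value at t = 2.0: the third
                                              # period's interest is not yet credited
```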


Journal ArticleDOI
TL;DR: Loan contracts may stipulate a constant rate of interest for several years followed by repayment of the amount borrowed, but nowadays clauses are as a rule admitted giving the debtor the right of conversion or repayment after a certain period; in practice, constant rates are by no means the rule.
Abstract: 1. The Unnatural Hypothesis of a Constant Rate of Interest. There are loan contracts which assume a constant rate of interest during several years and thereafter repayment of the amount borrowed, but nowadays clauses are as a rule admitted giving the debtor the right of conversion or repayment after a certain period, generally ten years. Low-interest loans can be considered as perpetuities from a practical point of view, as long as there is thought to be no possibility that the market rate will fall below their nominal rate. Such a loan (as, e.g., Consols) with the nominal rate i 0 ought to be valued at a discount if the market rate is higher, say i > i 0, the value being equal to the fraction i 0 : i. But constant rates are by no means the rule in practice.
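The valuation rule quoted above (value equal to the fraction i 0 : i) is the ordinary perpetuity formula: a coupon of i 0 per unit of face value, discounted for ever at the market rate i, is worth i 0 · (1/i). A trivial sketch with arbitrary rates:

```python
# Price per unit of face value of a perpetual loan with nominal rate i0,
# valued as a perpetuity at the market rate i.
def perpetuity_price_per_unit_face(i0, i):
    return i0 / i                              # present value of a coupon i0 for ever at rate i

print(perpetuity_price_per_unit_face(0.03, 0.04))   # 0.75 of face value when i0 = 3%, i = 4%
```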

Journal ArticleDOI
TL;DR: In statistical studies it is frequently convenient to represent a bivariate population (X, Y) by a straight line; where one variable is independent and the other dependent, the regression lines of Y on X and of X on Y apply, while the line of organic correlation is most frequently used when both variables are independent or a single summarizing line is desired.
Abstract: In statistical studies it is frequently convenient to represent a bivariate population (X, Y) by a straight line. Where it is possible to summarize by a straight line and where one variable is independent and the other dependent, the regression lines of Y on X and of X on Y apply. When both variables are independent or a single summarizing line is desired, a new solution is sought. The line most frequently used is the bivariate line of organic correlation. This line passes through the means of both variates; the magnitude of the slope is the ratio of the standard deviations, its sign being that of the correlation coefficient. A degenerate case occurs for a zero correlation coefficient.
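A small sketch of the line just described (elsewhere called the reduced major axis or geometric-mean line): it passes through the two means, and its slope has magnitude s_y / s_x with the sign of the correlation coefficient. The data below are invented.

```python
# Fit the line of organic correlation (reduced major axis) to paired data.
import numpy as np

def organic_correlation_line(x, y):
    x, y = np.asarray(x, float), np.asarray(y, float)
    r = np.corrcoef(x, y)[0, 1]
    slope = np.sign(r) * (y.std(ddof=1) / x.std(ddof=1))   # |slope| = s_y / s_x, sign of r
    intercept = y.mean() - slope * x.mean()                 # forces the line through the means
    return slope, intercept                                 # slope is 0 in the degenerate case r = 0

x = [1, 2, 3, 4, 5, 6]
y = [2.1, 2.9, 4.2, 4.8, 6.1, 6.8]
print(organic_correlation_line(x, y))
```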