Journal ArticleDOI

On the theory of mortality measurement

01 Jan 1956 - Scandinavian Actuarial Journal (Taylor & Francis Group) - Vol. 1956, Iss. 2, pp. 125-153
TL;DR: The authors study the efficiency of various methods of estimating the force of mortality and show that, at least for large samples, the maximum likelihood estimate is the most efficient, so the remaining methods are compared against this best estimate.
Abstract: 2.1. Limitations of the parametric methods. In the previous sections we have studied the efficiency of various methods of estimating the force of mortality. The most efficient of these is, at least for large samples, the one given by the maximum likelihood method, and the rest of them have to be compared to this best estimate. However, in this discussion the notion of efficiency is based on the assumption that the mortality intensity can be expressed by Makeham’s formula. How realistic is this in practice?
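For context, Makeham's formula referred to in this abstract expresses the force of mortality at age x in the standard three-parameter form (notation here is the conventional one, not necessarily the paper's):

    \mu_x = A + B\,c^{x}, \qquad A \ge 0,\; B > 0,\; c > 1,

so that the parametric estimation problem discussed above amounts to estimating A, B and c, for instance by maximum likelihood.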
Citations
Book ChapterDOI
TL;DR: The analysis of censored failure times is considered in this paper, where the hazard function is taken to be a function of the explanatory variables and unknown regression coefficients multiplied by an arbitrary and unknown function of time.
Abstract: The analysis of censored failure times is considered. It is assumed that on each individual are available values of one or more explanatory variables. The hazard function (age-specific failure rate) is taken to be a function of the explanatory variables and unknown regression coefficients multiplied by an arbitrary and unknown function of time. A conditional likelihood is obtained, leading to inferences about the unknown regression coefficients. Some generalizations are outlined.
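As a rough illustration of the model described in this abstract, the sketch below maximizes a Cox-type partial likelihood for the regression coefficients; the toy data, variable names, and the simple no-ties handling of risk sets are illustrative assumptions, not code from the paper.

    # Minimal sketch: Cox-type partial likelihood for censored failure times.
    # Assumes continuous times (no ties); data and names are illustrative.
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_partial_likelihood(beta, times, events, X):
        """times: observed times; events: 1 = failure, 0 = censored; X: covariates."""
        eta = X @ np.asarray(beta, dtype=float)      # linear predictor x'beta
        nll = 0.0
        for i in np.where(events == 1)[0]:           # sum over observed failures only
            at_risk = times >= times[i]              # risk set at the i-th failure time
            nll -= eta[i] - np.log(np.exp(eta[at_risk]).sum())
        return nll

    # toy data: two explanatory variables, independent right-censoring
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    t_fail = rng.exponential(scale=np.exp(-X @ np.array([0.5, -0.3])))
    t_cens = rng.exponential(scale=2.0, size=200)
    times, events = np.minimum(t_fail, t_cens), (t_fail <= t_cens).astype(int)

    fit = minimize(neg_log_partial_likelihood, x0=np.zeros(2),
                   args=(times, events, X), method="BFGS")
    print("estimated regression coefficients:", fit.x)

With this toy data the estimated coefficients should land near the (0.5, -0.3) used to generate the failure times.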

28,264 citations


Cites background from "On the theory of mortality measurement"

  • ...For the monotone hazard case with uncensored data, see Grenander (1956)....


BookDOI
01 Jan 2008
TL;DR: Semiparametric inference, as developed in this book, builds on empirical process theory and covers efficient inference for both finite-dimensional and infinite-dimensional parameters.
Abstract: Overview.- An Overview of Empirical Processes.- Overview of Semiparametric Inference.- Case Studies I.- Empirical Processes.- Introduction to Empirical Processes.- Preliminaries for Empirical Processes.- Stochastic Convergence.- Empirical Process Methods.- Entropy Calculations.- Bootstrapping Empirical Processes.- Additional Empirical Process Results.- The Functional Delta Method.- Z-Estimators.- M-Estimators.- Case Studies II.- Semiparametric Inference.- Introduction to Semiparametric Inference.- Preliminaries for Semiparametric Inference.- Semiparametric Models and Efficiency.- Efficient Inference for Finite-Dimensional Parameters.- Efficient Inference for Infinite-Dimensional Parameters.- Semiparametric M-Estimation.- Case Studies III.

1,141 citations

MonographDOI
01 Jan 2016
TL;DR: This monograph covers nonparametric statistical models, function spaces and approximation theory, and the minimax paradigm, and develops likelihood-based procedures and adaptive inference.
Abstract: 1. Nonparametric statistical models 2. Gaussian processes 3. Empirical processes 4. Function spaces and approximation theory 5. Linear nonparametric estimators 6. The minimax paradigm 7. Likelihood-based procedures 8. Adaptive inference.

534 citations

Journal ArticleDOI
TL;DR: The authors review recent developments in nonparametric density estimation, covering topics omitted from earlier review articles and books on the subject as well as recent research on the early methods such as the histogram, kernel estimators, and orthogonal series estimators.
Abstract: Advances in computation and the fast and cheap computational facilities now available to statisticians have had a significant impact upon statistical research, and especially the development of nonparametric data analysis procedures. In particular, theoretical and applied research on nonparametric density estimation has had a noticeable influence on related topics, such as nonparametric regression, nonparametric discrimination, and nonparametric pattern recognition. This article reviews recent developments in nonparametric density estimation and includes topics that have been omitted from review articles and books on the subject. The early density estimation methods, such as the histogram, kernel estimators, and orthogonal series estimators are still very popular, and recent research on them is described. Different types of restricted maximum likelihood density estimators, including order-restricted estimators, maximum penalized likelihood estimators, and sieve estimators, are discussed, where restrictions are imposed upon the class of densities or on the form of the likelihood function. Nonparametric density estimators that are data-adaptive and lead to locally smoothed estimators are also discussed; these include variable partition histograms, estimators based on statistically equivalent blocks, nearest-neighbor estimators, variable kernel estimators, and adaptive kernel estimators. For the multivariate case, extensions of methods of univariate density estimation are usually straightforward but can be computationally expensive. A method of multivariate density estimation that did not spring from a univariate generalization is described, namely, projection pursuit density estimation, in which both dimensionality reduction and density estimation can be pursued at the same time. Finally, some areas of related research are mentioned, such as nonparametric estimation of functionals of a density, robust parametric estimation, semiparametric models, and density estimation for censored and incomplete data, directional and spherical data, and density estimation for dependent sequences of observations.
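As a small, self-contained illustration of the kernel estimators surveyed in this review, the sketch below computes a Gaussian kernel density estimate with a rule-of-thumb bandwidth; the bandwidth choice, data, and names are illustrative assumptions, not taken from the article.

    # Minimal sketch of a Gaussian kernel density estimator.
    # Rule-of-thumb bandwidth; data and names are illustrative.
    import numpy as np

    def kde_gaussian(sample, grid, bandwidth=None):
        sample = np.asarray(sample, dtype=float)
        n = len(sample)
        if bandwidth is None:                        # Silverman-type rule of thumb
            bandwidth = 1.06 * sample.std(ddof=1) * n ** (-1 / 5)
        u = (grid[:, None] - sample[None, :]) / bandwidth
        kernel = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
        return kernel.mean(axis=1) / bandwidth       # average of rescaled kernels

    data = np.random.default_rng(1).normal(size=500)
    grid = np.linspace(-4.0, 4.0, 201)
    density = kde_gaussian(data, grid)
    print(density.sum() * (grid[1] - grid[0]))       # should be close to 1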

520 citations


Cites background from "On the theory of mortality measurement"

  • ...Grenander (1956) showed that the ML estimator for a nonincreasing density on [0, ∞) was a step function with jumps at the order statistics {X(i)}....

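The excerpt above describes the Grenander estimator: the maximum likelihood estimate of a nonincreasing density on [0, ∞) is a step function with jumps at the order statistics, equal to the left derivative of the least concave majorant of the empirical distribution function. Below is a minimal sketch of one standard way to compute it, via weighted pool-adjacent-violators applied to the empirical-CDF chord slopes; it assumes distinct observations and is illustrative, not code from either paper.

    # Minimal sketch of the Grenander estimator of a nonincreasing density on [0, inf).
    # Computes the slopes of the least concave majorant of the empirical CDF
    # by weighted pool-adjacent-violators; assumes distinct observations.
    import numpy as np

    def pava_nonincreasing(y, w):
        """Weighted least-squares fit of y under a nonincreasing constraint."""
        blocks = []                                   # each block: [value, weight, count]
        for yi, wi in zip(y, w):
            blocks.append([yi, wi, 1])
            while len(blocks) > 1 and blocks[-2][0] < blocks[-1][0]:
                y2, w2, c2 = blocks.pop()             # pool adjacent violators
                y1, w1, c1 = blocks.pop()
                blocks.append([(y1 * w1 + y2 * w2) / (w1 + w2), w1 + w2, c1 + c2])
        return np.repeat([b[0] for b in blocks], [b[2] for b in blocks])

    def grenander(sample):
        """Estimator value on each interval (X_(i-1), X_(i)], with X_(0) = 0."""
        x = np.sort(np.asarray(sample, dtype=float))
        gaps = np.diff(np.concatenate(([0.0], x)))    # X_(i) - X_(i-1)
        slopes = (1.0 / len(x)) / gaps                # chord slopes of the empirical CDF
        return x, pava_nonincreasing(slopes, gaps)

    knots, fhat = grenander(np.random.default_rng(2).exponential(size=300))
    print((fhat * np.diff(np.concatenate(([0.0], knots)))).sum())   # total mass = 1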

Journal ArticleDOI
TL;DR: A unifying algorithm for simultaneous estimation of both local FDR and tail area-based FDR is presented that can be applied to a diverse range of test statistics, including p-values, correlations, z- and t-scores.
Abstract: False discovery rate (FDR) methods play an important role in analyzing high-dimensional data. There are two types of FDR, tail area-based FDR and local FDR, as well as numerous statistical algorithms for estimating or controlling FDR. These differ in terms of underlying test statistics and procedures employed for statistical learning. A unifying algorithm for simultaneous estimation of both local FDR and tail area-based FDR is presented that can be applied to a diverse range of test statistics, including p-values, correlations, z- and t-scores. This approach is semiparametric and is based on a modified Grenander density estimator. For test statistics other than p-values it allows for empirical null modeling, so that dependencies among tests can be taken into account. The inference of the underlying model employs truncated maximum-likelihood estimation, with the cut-off point chosen according to the false non-discovery rate. The proposed procedure generalizes a number of more specialized algorithms and thus offers a common framework for FDR estimation consistent across test statistics and types of FDR. In a comparative study the unified approach performs on par with the best competing yet more specialized alternatives. The algorithm is implemented in R in the "fdrtool" package, available under the GNU GPL from http://strimmerlab.org/software/fdrtool/ and from the R package archive CRAN.
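For orientation on the tail-area notion of FDR discussed in this abstract, the sketch below computes Benjamini-Hochberg adjusted p-values, a simple tail-area FDR estimate with the null proportion fixed at 1; it is not the paper's semiparametric algorithm, which instead rests on a modified Grenander density estimator and truncated maximum likelihood (see the fdrtool package for that).

    # Minimal sketch: Benjamini-Hochberg adjusted p-values, a simple tail-area
    # FDR estimate with null proportion pi0 = 1. This is NOT the fdrtool
    # algorithm, which uses a modified Grenander density estimator.
    import numpy as np

    def bh_adjusted_pvalues(p):
        p = np.asarray(p, dtype=float)
        n = len(p)
        order = np.argsort(p)
        scaled = p[order] * n / np.arange(1, n + 1)         # p_(k) * n / k
        q = np.minimum.accumulate(scaled[::-1])[::-1]       # enforce monotonicity in k
        adjusted = np.empty(n)
        adjusted[order] = np.clip(q, 0.0, 1.0)
        return adjusted

    rng = np.random.default_rng(3)
    pvals = np.concatenate([rng.uniform(size=900),          # null p-values
                            rng.beta(0.1, 5.0, size=100)])  # signals concentrated near 0
    print((bh_adjusted_pvalues(pvals) < 0.05).sum(), "discoveries at FDR 0.05")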

393 citations


Cites methods from "On the theory of mortality measurement"

  • ...A further alternative approach is provided by the Grenander density estimator [25]....


References
Journal ArticleDOI
TL;DR: The standard actuarial methods of estimating the age-specific one-year probabilities of death in a given community were developed, for the most part many years ago, with large bodies of observations in mind, as discussed by the authors.
Abstract: The standard actuarial methods of estimating the age-specific one-year probabilities of death in a given community were developed, for the most part many years ago, with large bodies of observations in mind. Although the familiar “exposed to risk” procedure is known to provide unbiased estimates only when a rather dubious assumption is made about the progression of the instantaneous death-rate (the force of mortality) over the year of age (Cantelli, 1914), it is still the most widely used method of estimation. This is partly because the age-to-age increment in human mortality is relatively small—so that assumptions about its mathematical form are unimportant—and partly because suggested methods of estimation based on more “realistic” assumptions are usually laborious to apply to thousands of observations.
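For reference, the "exposed to risk" estimate referred to in this abstract is usually written (in standard actuarial notation, not necessarily the author's)

    \hat{q}_x = \frac{d_x}{E_x},

where d_x is the number of deaths observed between exact ages x and x+1 and E_x is the initial exposed to risk at age x; its classical justification as an unbiased estimate rests on an assumption about how the force of mortality varies over the year of age, often taken to be the Balducci hypothesis {}_{1-t}q_{x+t} = (1-t)\,q_x, which is the "rather dubious assumption" mentioned above.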

27 citations