
Showing papers in "Biometrika in 1998"


Journal ArticleDOI
TL;DR: In this paper, correlated binary data are analysed with the multivariate probit model: the posterior distribution is simulated by Markov chain Monte Carlo methods and maximum likelihood estimates are obtained by a Monte Carlo version of the EM algorithm.
Abstract: SUMMARY This paper provides a practical simulation-based Bayesian and non-Bayesian analysis of correlated binary data using the multivariate probit model. The posterior distribution is simulated by Markov chain Monte Carlo methods and maximum likelihood estimates are obtained by a Monte Carlo version of the EM algorithm. A practical approach for the computation of Bayes factors from the simulation output is also developed. The methods are applied to a dataset with a bivariate binary response, to a four-year longitudinal dataset from the Six Cities study of the health effects of air pollution and to a seven-variate binary response dataset on the labour supply of married women from the Panel Study of Income Dynamics.

782 citations
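The data-augmentation idea behind this kind of sampler is easy to sketch. The toy below is a univariate probit Gibbs sampler in the spirit of the Albert-Chib scheme, with a flat prior on the coefficients; it illustrates the latent-variable mechanics only, not the paper's multivariate algorithm, and the data and settings are made up.

```python
# Gibbs sampler for a probit model via data augmentation (univariate
# illustration; the paper treats the multivariate case). Flat prior on beta.
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(0)

# Simulated data: y_i = 1{x_i' beta + e_i > 0}, e_i ~ N(0, 1)
n, beta_true = 200, np.array([-0.5, 1.0])
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)
beta, draws = np.zeros(2), []
for it in range(2000):
    # Sample latent z_i ~ N(x_i' beta, 1), truncated to agree with y_i
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)   # z > 0 when y = 1
    hi = np.where(y == 1, np.inf, -mu)    # z < 0 when y = 0
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # Sample beta | z ~ N((X'X)^{-1} X'z, (X'X)^{-1}) under the flat prior
    beta = rng.multivariate_normal(XtX_inv @ X.T @ z, XtX_inv)
    if it >= 500:                          # discard burn-in
        draws.append(beta)

print("posterior mean:", np.mean(draws, axis=0))
```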


Journal ArticleDOI
TL;DR: A predictive criterion is proposed in which the goal is good prediction of a replicate of the observed data, tempered by fidelity to the observed values; it is obtained by minimising posterior loss for a given model.
Abstract: SUMMARY Model choice is a fundamental and much discussed activity in the analysis of datasets. Nonnested hierarchical models introducing random effects may not be handled by classical methods. Bayesian approaches using predictive distributions can be used though the formal solution, which includes Bayes factors as a special case, can be criticised. We propose a predictive criterion where the goal is good prediction of a replicate of the observed data but tempered by fidelity to the observed values. We obtain this criterion by minimising posterior loss for a given model and then, for models under consideration, selecting the one which minimises this criterion. For a broad range of losses, the criterion emerges as a form partitioned into a goodness-of-fit term and a penalty term. We illustrate its performance with an application to a large dataset involving residential property transactions.

750 citations


Journal ArticleDOI
TL;DR: In this article, a minimum divergence estimation method is developed for robust parameter estimation, using new density-based divergences that avoid nonparametric density estimation and associated complications such as bandwidth selection.
Abstract: A minimum divergence estimation method is developed for robust parameter estimation. The proposed approach uses new density-based divergences which, unlike existing methods of this type such as minimum Hellinger distance estimation, avoid the use of nonparametric density estimation and associated complications such as bandwidth selection. The proposed class of ‘density power divergences’ is indexed by a single parameter α which controls the trade-off between robustness and efficiency. The methodology affords a robust extension of maximum likelihood estimation, which corresponds to α = 0. Choices of α near zero afford considerable robustness while retaining efficiency close to that of maximum likelihood.

701 citations
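For a concrete family the estimating criterion can be written out directly. The sketch below minimises the empirical density power divergence for a normal location-scale model, using the closed form ∫ f^(1+α) dx = (2πσ²)^(−α/2) (1+α)^(−1/2); the contaminated sample and the values of α are illustrative only.

```python
# Minimum density power divergence estimation for N(mu, sigma^2).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

def dpd_objective(theta, x, alpha):
    """Empirical DPD criterion (up to a constant not involving theta):
    integral of f^(1+alpha) minus (1 + 1/alpha) * mean of f(x_i)^alpha."""
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)              # keep sigma positive
    integral = (2 * np.pi * sigma**2) ** (-alpha / 2) / np.sqrt(1 + alpha)
    return integral - (1 + 1 / alpha) * np.mean(norm.pdf(x, mu, sigma) ** alpha)

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 95), rng.normal(8, 1, 5)])  # 5% outliers

for alpha in (0.1, 0.5):                   # larger alpha = more robust
    res = minimize(dpd_objective, x0=[np.median(x), 0.0], args=(x, alpha))
    print(f"alpha={alpha}: mu={res.x[0]:.3f}, sigma={np.exp(res.x[1]):.3f}")
print("MLE (alpha -> 0):", x.mean(), x.std())
```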


Journal ArticleDOI
TL;DR: In this article, the authors considered nonparametric estimation in a varying coefficient model with repeated measurements, where the measurements are assumed to be independent for different subjects but can be correlated at different time points within each subject.
Abstract: This paper considers nonparametric estimation in a varying coefficient model with repeated measurements (Y_ij, X_ij, t_ij), for i = 1, ..., n and j = 1, ..., n_i, where X_ij = (X_ij0, ..., X_ijk)^T and (Y_ij, X_ij, t_ij) denote the jth outcome, covariate and time design points, respectively, of the ith subject. The model considered here is Y_ij = X_ij^T β(t_ij) + e_i(t_ij), where β(t) = (β_0(t), ..., β_k(t))^T, for k ≥ 0, are smooth nonparametric functions of interest and e_i(t) is a zero-mean stochastic process. The measurements are assumed to be independent for different subjects but can be correlated at different time points within each subject. Two nonparametric estimators of β(t), namely a smoothing spline and a locally weighted polynomial, are derived for such repeatedly measured data. A cross-validation criterion is proposed for the selection of the corresponding smoothing parameters. Asymptotic properties, such as consistency, rates of convergence and asymptotic mean squared errors, are established for kernel estimators, a special case of the local polynomials. These asymptotic results give useful insights into the reliability of our general estimation methods. An example of predicting the growth of children born to HIV infected mothers based on gender, HIV status and maternal vitamin A levels shows that this model and the corresponding nonparametric estimators are useful in epidemiological studies.

652 citations
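As a rough illustration of the local-fitting idea, the sketch below computes a kernel-weighted least squares fit of the coefficients at each grid point, i.e. a local constant fit rather than the paper's smoothing splines or local polynomials; the simulated design is made up.

```python
# Kernel (local constant) estimator of varying coefficients beta(t)
# in Y = X' beta(t) + e, pooling all (subject, time) observations.
import numpy as np

def kernel_vc_fit(t_grid, t, X, y, h):
    betas = []
    for t0 in t_grid:
        w = np.exp(-0.5 * ((t - t0) / h) ** 2)   # Gaussian kernel weights
        Xw = X * w[:, None]
        betas.append(np.linalg.solve(Xw.T @ X, Xw.T @ y))  # weighted LS
    return np.array(betas)

rng = np.random.default_rng(2)
# 50 subjects, 5 time points each; true beta0(t) = sin(t), beta1(t) = t
t = np.tile(np.linspace(0, 2, 5), 50) + rng.uniform(-0.1, 0.1, 250)
X = np.column_stack([np.ones(250), rng.normal(size=250)])
y = np.sin(t) + t * X[:, 1] + rng.normal(0, 0.3, 250)

t_grid = np.linspace(0.1, 1.9, 10)
print(kernel_vc_fit(t_grid, t, X, y, h=0.3))
```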


Journal ArticleDOI
TL;DR: Fan and Yao propose an efficient and adaptive method for estimating the conditional variance by applying a local linear regression to the squared residuals; the asymptotic theory also supports an automatic bandwidth selection scheme.
Abstract: Conditional heteroscedasticity has been often used in modelling and understanding the variability of statistical data. Under a general setup which includes the nonlinear time series model as a special case, we propose an efficient and adaptive method for estimating the conditional variance. The basic idea is to apply a local linear regression to the squared residuals. We demonstrate that, without knowing the regression function, we can estimate the conditional variance asymptotically as well as if the regression were given. This asymptotic result, established under the assumption that the observations are made from a strictly stationary and absolutely regular process, is also verified via simulation. Further, the asymptotic result paves the way for adapting an automatic bandwidth selection scheme. An application with financial data illustrates the usefulness of the proposed techniques.

404 citations
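The two-stage residual idea translates directly into code: fit the mean by local linear regression, then fit a second local linear regression to the squared residuals. The data and bandwidths below are made up for illustration.

```python
# Conditional variance via local linear regression on squared residuals.
import numpy as np

def local_linear(x0, x, y, h):
    """Local linear fit at x0 with a Gaussian kernel; returns the intercept."""
    w = np.exp(-0.5 * ((x - x0) / h) ** 2)
    Z = np.column_stack([np.ones_like(x), x - x0])
    return np.linalg.solve(Z.T @ (Z * w[:, None]), Z.T @ (w * y))[0]

rng = np.random.default_rng(3)
x = rng.uniform(-2, 2, 500)
y = np.sin(x) + (0.2 + 0.3 * x**2) * rng.normal(size=500)  # sd = 0.2 + 0.3x^2

m_hat = np.array([local_linear(xi, x, y, h=0.3) for xi in x])
r2 = (y - m_hat) ** 2                       # squared residuals
grid = np.linspace(-1.5, 1.5, 7)
v_hat = np.array([local_linear(g, x, r2, h=0.4) for g in grid])
print(np.c_[grid, np.sqrt(v_hat), 0.2 + 0.3 * grid**2])  # est. vs true sd
```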


Journal ArticleDOI
TL;DR: This parameter-expanded EM, PX-EM, algorithm shares the simplicity and stability of ordinary EM, but has a faster rate of convergence since its M step performs a more efficient analysis.
Abstract: SUMMARY The EM algorithm and its extensions are popular tools for modal estimation but are often criticised for their slow convergence. We propose a new method that can often make EM much faster. The intuitive idea is to use a 'covariance adjustment' to correct the analysis of the M step, capitalising on extra information captured in the imputed complete data. The way we accomplish this is by parameter expansion; we expand the complete-data model while preserving the observed-data model and use the expanded complete-data model to generate EM. This parameter-expanded EM, PX-EM, algorithm shares the simplicity and stability of ordinary EM, but has a faster rate of convergence since its M step performs a more efficient analysis. The PX-EM algorithm is illustrated for the multivariate t distribution, a random effects model, factor analysis, probit regression and a Poisson imaging model.

393 citations
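For the t-distribution example the practical difference between EM and PX-EM comes down, as I understand it, to a single denominator in the scale update: PX-EM divides the weighted sum of squares by the sum of the weights rather than by n, which is where the expansion parameter is folded back into the original model. A minimal univariate sketch with known degrees of freedom (treat the exact update as an assumption):

```python
# EM vs PX-EM for the location-scale t-distribution with known df nu.
import numpy as np

def fit_t(x, nu, px=False, iters=50):
    mu, s2 = np.median(x), x.var()
    for _ in range(iters):
        # E-step: expected latent precision weights given current (mu, s2)
        w = (nu + 1) / (nu + (x - mu) ** 2 / s2)
        # M-step: weighted updates; PX-EM changes only the s2 denominator
        mu = np.sum(w * x) / np.sum(w)
        denom = np.sum(w) if px else len(x)
        s2 = np.sum(w * (x - mu) ** 2) / denom
    return mu, s2

rng = np.random.default_rng(4)
x = 1.0 + 2.0 * rng.standard_t(df=3, size=500)
print("EM   :", fit_t(x, nu=3))            # both reach the same MLE,
print("PX-EM:", fit_t(x, nu=3, px=True))   # PX-EM typically in fewer steps
```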


Journal ArticleDOI
TL;DR: A stochastic search form of classification and regression tree (CART) analysis is proposed, motivated by a Bayesian model; an approximation to a probability distribution over the space of possible trees is explored.
Abstract: A stochastic search form of classification and regression tree (CART) analysis (Breiman et al., 1984) is proposed, motivated by a Bayesian model. An approximation to a probability distribution over the space of possible trees is explored using reversible jump Markov chain Monte Carlo methods (Green, 1995).

325 citations


Journal ArticleDOI
TL;DR: In this article, doubly stochastic Bayesian hierarchical models are introduced to account for uncertainty and spatial variation in the underlying intensity measure for point process models; the methods are applied to a problem in forest ecology.
Abstract: SUMMARY Doubly stochastic Bayesian hierarchical models are introduced to account for uncertainty and spatial variation in the underlying intensity measure for point process models. Inhomogeneous gamma process random fields and, more generally, Markov random fields with infinitely divisible distributions are used to construct positively autocorrelated intensity measures for spatial Poisson point processes; these in turn are used to model the number and location of individual events. A data augmentation scheme and Markov chain Monte Carlo numerical methods are employed to generate samples from Bayesian posterior and predictive distributions. The methods are developed in both continuous and discrete settings, and are applied to a problem in forest ecology.

275 citations


Journal ArticleDOI
TL;DR: In this article, a Bayesian hierarchical model for multiple shrinkage estimation in wavelets is proposed, with a focus on easy-to-compute analytic approximations as well as importance sampling and Markov chain Monte Carlo.
Abstract: This paper discusses Bayesian methods for multiple shrinkage estimation in wavelets. Wavelets are used in applications for data denoising, via shrinkage of the coefficients towards zero, and for data compression, by shrinkage and setting small coefficients to zero. We approach wavelet shrinkage by using Bayesian hierarchical models, assigning a positive prior probability to the wavelet coefficients being zero. The resulting estimator for the wavelet coefficients is a multiple shrinkage estimator that exhibits a wide variety of nonlinear patterns. We discuss fast computational implementations, with a focus on easy-to-compute analytic approximations as well as importance sampling and Markov chain Monte Carlo methods. Multiple shrinkage estimators prove to have excellent mean squared error performance in reconstructing standard test functions. We demonstrate this in simulated test examples, comparing various implementations of multiple shrinkage to commonly-used shrinkage rules. Finally, we illustrate our approach with an application to the so-called 'glint' data.

260 citations
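The single-coefficient version of such a rule is compact: with a point-mass-at-zero plus normal prior on a coefficient observed with Gaussian noise, the posterior mean mixes zero with a linear shrinkage of the observation, giving the nonlinear behaviour described above. The sketch below shows just that rule, leaving the wavelet transform itself aside; all numbers are illustrative.

```python
# Posterior-mean shrinkage under the prior pi0*delta_0 + (1-pi0)*N(0, tau^2),
# for a coefficient observed as d ~ N(theta, sigma^2).
import numpy as np
from scipy.stats import norm

def posterior_mean_shrink(d, sigma, tau, pi0):
    s2 = sigma**2 + tau**2
    m0 = pi0 * norm.pdf(d, 0, sigma)              # marginal if theta = 0
    m1 = (1 - pi0) * norm.pdf(d, 0, np.sqrt(s2))  # marginal if theta nonzero
    w = m1 / (m0 + m1)                            # P(theta != 0 | d)
    return w * (tau**2 / s2) * d                  # mix 0 with linear shrinkage

d = np.array([-0.2, 0.5, 1.0, 3.0, 8.0])
print(posterior_mean_shrink(d, sigma=1.0, tau=3.0, pi0=0.8))
# Small coefficients are pulled essentially to zero; large ones survive.
```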


Journal ArticleDOI
TL;DR: In this article, a vaguelette-wavelet decomposition method is proposed in which the observed data are expanded directly in wavelet series, and the performance of various methods is compared through exact risk calculations in the context of estimating the derivative of a function observed subject to noise.
Abstract: SUMMARY A wide variety of scientific settings involve indirect noisy measurements where one faces a linear inverse problem in the presence of noise. Primary interest is in some function f(t) but data are accessible only about some linear transform corrupted by noise. The usual linear methods for such inverse problems do not perform satisfactorily when f(t) is spatially inhomogeneous. One existing nonlinear alternative is the wavelet-vaguelette decomposition method, based on the expansion of the unknown f(t) in wavelet series. In the vaguelette-wavelet decomposition method proposed here, the observed data are expanded directly in wavelet series. The performances of various methods are compared through exact risk calculations, in the context of the estimation of the derivative of a function observed subject to noise. A result is proved demonstrating that, with a suitable universal threshold somewhat larger than that used for standard denoising problems, both the wavelet-based approaches have an ideal spatial adaptivity property.

259 citations


Journal ArticleDOI
TL;DR: In this article, the authors consider inference in general binary response regression models under retrospective sampling plans and show that the estimating function obtained from the prospective likelihood is optimal in a class of unbiased estimating functions.
Abstract: SUMMARY We consider inference in general binary response regression models under retrospective sampling plans. Prentice & Pyke (1979) discovered that inference for the odds-ratio parameter in a logistic model can be based on a prospective likelihood even though the sampling scheme is retrospective. We show that the estimating function obtained from the prospective likelihood is optimal in a class of unbiased estimating functions. Also we link case-control sampling with a two-sample biased sampling problem, where the ratio of two densities is assumed to take a known parametric form. Connections between this model and the Cox proportional hazards model are pointed out. Large and small sample size behaviour of the proposed estimators is studied.

Journal ArticleDOI
TL;DR: In this paper, it is shown that the conditional least squares estimator of the parameters including the threshold parameter is root-n consistent and asymptotically normally distributed in the continuous threshold autoregressive model.
Abstract: The continuous threshold autoregressive model is a sub-class of the threshold autoregressive model subject to the requirement that the piece-wise linear autoregressive function be continuous everywhere. In contrast with the discontinuous case, it is shown that, under suitable regularity conditions, the conditional least squares estimator of the parameters including the threshold parameter is root-n consistent and asymptotically normally distributed. The theory is illustrated by a simulation study and is applied to the quarterly U.S. unemployment rates.
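One way to see why continuity helps is the hinge parameterisation: x_t = a + b x_{t-1} + c max(x_{t-1} - r, 0) + e_t is continuous in x_{t-1} by construction, so conditional least squares reduces to ordinary least squares inside a grid search over the threshold. A sketch with simulated data and illustrative settings:

```python
# Conditional least squares for a continuous two-regime TAR(1).
import numpy as np

def fit_continuous_tar(x, thresholds):
    y, lag = x[1:], x[:-1]
    best = None
    for r in thresholds:
        Z = np.column_stack([np.ones_like(lag), lag, np.maximum(lag - r, 0)])
        coef, *_ = np.linalg.lstsq(Z, y, rcond=None)   # OLS given r
        rss = np.sum((y - Z @ coef) ** 2)
        if best is None or rss < best[0]:
            best = (rss, r, coef)
    return best

rng = np.random.default_rng(5)
x = np.zeros(600)
for t in range(1, 600):    # true: a=0, b=0.8, c=-0.5, threshold r=0
    x[t] = 0.8 * x[t-1] - 0.5 * max(x[t-1], 0.0) + rng.normal(0, 0.5)

rss, r_hat, coef = fit_continuous_tar(x, np.linspace(-1, 1, 81))
print("threshold:", r_hat, "coefficients (a, b, c):", coef)
```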

Journal ArticleDOI
TL;DR: In this article, the asymptotic variance structure of the resulting estimators is derived and used to compare the relative efficiencies of different imputation procedures.
Abstract: We consider the asymptotic behaviour of various parametric multiple imputation procedures which include but are not restricted to the 'proper' imputation procedures proposed by Rubin (1978). The asymptotic variance structure of the resulting estimators is provided. This result is used to compare the relative efficiencies of different imputation procedures. It also provides a basis to understand the behaviour of two Monte Carlo iterative estimators, stochastic EM (Celeux & Diebolt, 1985; Wei & Tanner, 1990) and simulated EM (Ruud, 1991). We further develop properties of these estimators when they stop at iteration K with imputation size m. An application to a measurement error problem is used to illustrate the results.

Journal ArticleDOI
TL;DR: In this paper, a cross-validation approach for bandwidth selection in the kernel smoothing of distribution functions is proposed, based on unbiased estimation of a mean integrated squared error curve whose minimising value defines an optimal smoothing parameter.
Abstract: SUMMARY Several approaches can be made to the choice of bandwidth in the kernel smoothing of distribution functions. Recent proposals by Sarda (1993) and by Altman & Leger (1995) are analogues of the 'leave-one-out' and 'plug-in' methods which have been widely used in density estimation. In contrast, a method of cross-validation appropriate to the smoothing of distribution functions is proposed. Selection of the bandwidth parameter is based on unbiased estimation of a mean integrated squared error curve whose minimising value defines an optimal smoothing parameter. This procedure is shown to lead to asymptotically optimal bandwidth choice, not just in the usual first-order sense but also in the second-order sense in which kernel methods improve on the standard empirical distribution function. Some general theory on the performance of optimal, data-based methods of bandwidth choice is also provided, leading to results which do not have analogues in the context of density estimation. The numerical performances of all the methods discussed in the paper are compared. A bandwidth based on a simple reference distribution is also included. Simulations suggest that the cross-validatory proposal works well, although the simple reference bandwidth is also quite effective.
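A criterion of this type can be prototyped directly: compare the indicator 1{x >= X_i} with a leave-one-out kernel estimate of the distribution function and integrate the squared difference numerically. The sketch below illustrates the construction rather than the paper's exact implementation; grid, kernel and sample are all placeholders.

```python
# Cross-validation bandwidth for kernel smoothing of a distribution function.
import numpy as np
from scipy.stats import norm

def cv_score(h, x, grid):
    n = len(x)
    K = norm.cdf((grid[None, :] - x[:, None]) / h)  # row i: kernel CDF of X_i
    F_all = K.mean(axis=0)
    dx = grid[1] - grid[0]
    score = 0.0
    for i in range(n):
        F_loo = (n * F_all - K[i]) / (n - 1)        # leave X_i out
        indicator = (grid >= x[i]).astype(float)
        score += np.sum((indicator - F_loo) ** 2) * dx
    return score / n

rng = np.random.default_rng(6)
x = rng.normal(size=100)
grid = np.linspace(-4, 4, 400)
hs = np.linspace(0.05, 1.0, 20)
print("CV bandwidth:", hs[np.argmin([cv_score(h, x, grid) for h in hs])])
```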

Journal ArticleDOI
TL;DR: In this paper, it is shown that confidence regions enjoying the same convergence rates as those found for empirical likelihood can be obtained for the entire range of values of the Cressie-Read parameter, including -1, maximum entropy, 0, empirical likelihood, and 1, Pearson's chi-squared.
Abstract: SUMMARY The method of empirical likelihood can be viewed as one of allocating probabilities to an n-cell contingency table so as to minimise a goodness-of-fit criterion. It is shown that, when the Cressie-Read power-divergence statistic is used as the criterion, confidence regions enjoying the same convergence rates as those found for empirical likelihood can be obtained for the entire range of values of the Cressie-Read parameter λ, including -1, maximum entropy, 0, empirical likelihood, and 1, Pearson's chi-squared. It is noted that, in the power-divergence family, empirical likelihood is the only member which is Bartlett-correctable. However, simulation results suggest that, for the mean, using a scaled F distribution yields more accurate coverage levels for moderate sample sizes.
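For the mean, the empirical likelihood member of this family is computable through the usual Lagrange dual in a scalar multiplier; the sketch below returns -2 log of the empirical likelihood ratio, with an illustrative exponential sample.

```python
# Empirical likelihood ratio for a mean, via the scalar dual problem.
import numpy as np
from scipy.optimize import brentq

def el_logratio(mu, x):
    """-2 log empirical likelihood ratio for the mean mu."""
    z = x - mu
    if z.min() >= 0 or z.max() <= 0:
        return np.inf                       # mu outside the convex hull
    # Weights w_i = 1 / (n (1 + lam z_i)); lam solves sum z/(1 + lam z) = 0
    lo = (-1 + 1e-10) / z.max()
    hi = (-1 + 1e-10) / z.min()
    lam = brentq(lambda l: np.sum(z / (1 + l * z)), lo, hi)
    return 2 * np.sum(np.log1p(lam * z))

rng = np.random.default_rng(7)
x = rng.exponential(size=50)                # true mean 1
for mu in (0.8, 1.0, 1.2):
    print(mu, el_logratio(mu, x))           # compare with chi-squared(1)
```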

Journal ArticleDOI
TL;DR: In this article, the authors proposed a semiparametric additive hazards model, which specifies that the hazard function for the failure time associated with a set of possibly time-dependent covariates is the sum of an arbitrary baseline hazard function and a regression function of covariates.
Abstract: Current status data arise when the only knowledge about the failure time of interest is whether the failure occurs before or after a random monitoring time. We propose to analyse such data by the semiparametric additive hazards model, which specifies that the hazard function for the failure time associated with a set of possibly time-dependent covariates is the sum of an arbitrary baseline hazard function and a regression function of covariates. Under certain conditions on the monitoring time, one can make inferences about the regression parameters of the additive hazards model by using the familiar asymptotic theory and software for the proportional hazards model with right censored data. An application to a carcinogenicity experiment is provided.

Journal ArticleDOI
TL;DR: In this article, a natural extension of the conventional accelerated failure time model for survival data is presented to formulate the effects of covariates on the mean function of the counting process for recurrent events.
Abstract: SUMMARY We present a natural extension of the conventional accelerated failure time model for survival data to formulate the effects of covariates on the mean function of the counting process for recurrent events. A class of consistent and asymptotically normal rank estimators is developed for estimating the regression parameters of the proposed model. In addition, a Nelson-Aalen-type estimator for the mean function of the counting process is constructed, which is consistent and, properly normalised, converges weakly to a zero-mean Gaussian process. We assess the finite-sample properties of the proposed estimators and the associated inference procedures through Monte Carlo simulation and provide an application to a well-known bladder cancer study.

Journal ArticleDOI
TL;DR: In this paper, a new methodological approach for carrying out Bayesian inference about dynamic models for exponential family observations is presented, which is simulation based and involves the use of Markov chain Monte Carlo techniques.
Abstract: SUMMARY This paper presents a new methodological approach for carrying out Bayesian inference about dynamic models for exponential family observations. The approach is simulation-based and involves the use of Markov chain Monte Carlo techniques. A Metropolis-Hastings algorithm is combined with the Gibbs sampler in repeated use of an adjusted version of normal dynamic linear models. Different alternative schemes based on sampling from the system disturbances and state parameters separately and in a block are derived and compared. The approach is fully Bayesian in obtaining posterior samples with state parameters and unknown hyperparameters. Illustrations with real datasets with sparse counts and missing values are presented. Extensions to accommodate more general evolution forms and distributions for observations and disturbances are outlined.

Journal ArticleDOI
TL;DR: In this paper, the question of model choice for the class of stationary and nonstationary, fractional and non-fractional autoregressive processes is considered, and a version of the Akaike information criterion, AIC, is derived and shown to be of the same general form as for a stationary autoregressive process, but with d treated as an additional estimated parameter.
Abstract: SUMMARY The question of model choice for the class of stationary and nonstationary, fractional and nonfractional autoregressive processes is considered. This class is defined by the property that the dth difference, for -1/2 < d < ∞, is a stationary autoregressive process of order p0 < ∞. A version of the Akaike information criterion, AIC, for determining an appropriate autoregressive order when d and the autoregressive parameters are estimated simultaneously by a maximum likelihood procedure (Beran, 1995) is derived and shown to be of the same general form as for a stationary autoregressive process, but with d treated as an additional estimated parameter. Moreover, as in the stationary case, this criterion is shown not to provide a consistent estimator of p0. The corresponding versions of the BIC of Schwarz (1978) and the HIC of Hannan & Quinn (1979) are shown to yield consistent estimators of p0. The results provide a unified treatment of fractional and nonfractional, stationary and integrated nonstationary autoregressive models.

Journal ArticleDOI
TL;DR: In this paper, a general class of sampling methods without replacement and with unequal probabilities is proposed, which consists of splitting the inclusion probability vector into several new inclusion probability vectors, one of these vectors is chosen randomly; thus, the initial problem is reduced to another sampling problem with unequal probability.
Abstract: SUMMARY A very general class of sampling methods without replacement and with unequal probabilities is proposed. It consists of splitting the inclusion probability vector into several new inclusion probability vectors. One of these vectors is chosen randomly; thus, the initial problem is reduced to another sampling problem with unequal probabilities. This splitting is then repeated on these new vectors of inclusion probabilities; at each step, the sampling problem is reduced to a simpler problem. The simplicity of this technique allows one to generate easily new sampling procedures with unequal probabilities. The splitting method also generalises well-known methods such as the Midzuno method, the elimination procedure and the Chao procedure. Next, a sufficient condition is given in order that a splitting method satisfies the Sen-Yates-Grundy condition. Finally, it is shown that the elimination procedure satisfies the Gabler sufficient condition.
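A concrete member of the splitting family works on pairs: take two inclusion probabilities strictly between 0 and 1 and split the vector into two new vectors, in one of which a probability is pushed to 0 or 1, choosing between them with probabilities that preserve expectations. The pivotal-style rule below is one such scheme, written from memory, so treat the exact branching probabilities as an assumption (they do preserve the inclusion probabilities by construction):

```python
# A pivotal-style splitting sampler for unequal probabilities without
# replacement; each step resolves one unit to 0 or 1.
import numpy as np

def pivotal_sample(pi, rng, eps=1e-12):
    pi = np.asarray(pi, dtype=float).copy()
    while True:
        idx = np.flatnonzero((pi > eps) & (pi < 1 - eps))
        if len(idx) < 2:
            break
        i, j = idx[0], idx[1]
        a, b = pi[i], pi[j]
        if a + b <= 1:      # one of the two units is dropped (set to 0)
            if rng.random() < b / (a + b):
                pi[i], pi[j] = 0.0, a + b
            else:
                pi[i], pi[j] = a + b, 0.0
        else:               # one of the two units is selected (set to 1)
            if rng.random() < (1 - b) / (2 - a - b):
                pi[i], pi[j] = 1.0, a + b - 1
            else:
                pi[i], pi[j] = a + b - 1, 1.0
    return np.flatnonzero(pi > 0.5)

rng = np.random.default_rng(8)
pi = np.array([0.2, 0.3, 0.5, 0.7, 0.8, 0.5])   # sums to 3: samples of size 3
counts = np.zeros(len(pi))
for _ in range(20000):
    counts[pivotal_sample(pi, rng)] += 1
print(counts / 20000)    # empirical inclusion frequencies, close to pi
```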

Journal ArticleDOI
TL;DR: In this paper, the authors derive Schwarz's information criterion and two modifications for choosing fixed effects in normal linear mixed models, and apply their results to evaluate a large class of models for repeated neuron area measurements in alcoholic and suicidal patients.
Abstract: In this paper we derive Schwarz's information criterion and two modifications for choosing fixed effects in normal linear mixed models. The first modification allows an arbitrary, possibly informative, prior for the parameter of interest. Replacing this prior with the normal, unit-information, prior of Kass & Wasserman (1995) and the generalised Cauchy prior of Jeffreys (1961) yields the usual Schwarz criterion and a second modification, respectively. Under the null hypothesis, these criteria approximate Bayes factors using the corresponding priors to increased accuracy. In regression, the second modification also corresponds asymptotically to the Bayes factors of Zellner & Siow (1980) and O'Hagan (1995), and is similar to the Bayes factor of Berger & Pericchi (1996). In mixed models, the effective sample size term in Schwarz's formula is ambiguous because of correlation between observations. We propose an appropriate generalisation of Schwarz's approximation and apply our results to evaluate a large class of models for repeated neuron area measurements in alcoholic and suicidal patients.

Journal ArticleDOI
TL;DR: In this note nothing more than the basic ingredient of the uniform random number generator is required for simulating binary data from the most commonly occurring correlation structures.
Abstract: SUMMARY It is important to be able to generate correlated binary data in an efficient, easily programmed manner for, among other things, the generation of large bootstrap samples. In this note nothing more than the basic ingredient of the uniform random number generator is required for simulating binary data from the most commonly occurring correlation structures.
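One construction in this spirit, for the exchangeable case, uses a shared Bernoulli variable and per-variate Bernoulli switches: each Y_i equals its own Bern(p) draw unless a switch with success probability sqrt(rho) fires, in which case it is replaced by the shared draw, which makes every pairwise correlation equal to rho. Whether this matches the note's exact scheme is an assumption; the sketch uses only uniform random numbers, as the note advertises.

```python
# Exchangeable correlated binary data from uniforms only.
import numpy as np

def exchangeable_binary(n_vars, p, rho, n_samples, rng):
    X = rng.random((n_samples, n_vars)) < p              # independent Bern(p)
    Z = rng.random((n_samples, 1)) < p                   # one shared Bern(p)
    U = rng.random((n_samples, n_vars)) < np.sqrt(rho)   # switches
    return np.where(U, Z, X).astype(int)                 # corr(Y_i, Y_j) = rho

rng = np.random.default_rng(9)
Y = exchangeable_binary(5, p=0.3, rho=0.4, n_samples=100000, rng=rng)
print("means:", Y.mean(axis=0))                          # ~0.3 each
print("corr(Y1, Y2):", np.corrcoef(Y[:, 0], Y[:, 1])[0, 1])  # ~0.4
```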

Journal ArticleDOI
TL;DR: In this article, the authors introduce two classes of empirical discrepancy statistics that extend empirical likelihood, and establish simple conditions under which they admit Bartlett adjustment.
Abstract: SUMMARY We introduce two classes of empirical discrepancy statistics that extend empirical likelihood, and establish simple conditions under which they admit Bartlett adjustment.

Journal ArticleDOI
TL;DR: In this paper, a simple modification of the estimation procedures of Cheng, Wei & Ying for the regression parameters in semiparametric linear transformation models with censored observations is presented; the new proposals perform well, but the original interval estimators may not have correct coverage probabilities when censoring is heavy.
Abstract: Recently Cheng, Wei & Ying (1995, 1997) proposed a class of estimation procedures for semi-parametric linear transformation models with censored observations. When the support of the censoring variable is shorter than that of the failure time, the estimators are asymptotically biased. In this paper, we present a simple modification of Cheng's estimation procedures for the regression parameters. Through extensive numerical studies with practical sample sizes, we find that the new proposals perform well, but the original interval estimators may not have correct coverage probabilities when censoring is heavy. Prediction procedures for the survival probabilities of future subjects are also modified accordingly.

Journal ArticleDOI
TL;DR: A general Bayesian method of comparing models based on the Kullback-Leibler distance between two families of models, one nested within the other, which can judge whether or not a more parsimonious model is appropriate.
Abstract: SUMMARY We propose a general Bayesian method of comparing models. The approach is based on the Kullback-Leibler distance between two families of models, one nested within the other. For each parameter value of a full model, we compute the projection of the model to the restricted parameter space and the corresponding minimum distance. From the posterior distribution of the minimum distance, we can judge whether or not a more parsimonious model is appropriate. We show how the projection method can be implemented for generalised linear model selection and we propose some Markov chain Monte Carlo algorithms for its practical implementation in less tractable cases. We illustrate the method with examples.

Journal ArticleDOI
TL;DR: In this article, a dual sensitivity analysis for matched pairs was developed, focusing on the strength of the relationship between U and the response required to reduce an observed association to non-significance.
Abstract: SUMMARY When a study shows an association between a treatment and a response, before concluding that there is a causal relationship it is useful to assess whether or not an unobserved variable, U, might explain the observed association. Sensitivity analysis clarifies the properties U must have in terms of its relationship to the response and its imbalance in the groups being compared. A substantial literature has investigated the imbalance or association between U and group assignment. This paper develops a dual sensitivity analysis for matched pairs, focusing on the strength of the relationship between U and the response required to reduce an observed association to non-significance. A third simultaneous form of sensitivity analysis models both relationships between U and treatment assignment and U and response. The simultaneous form allows one to compare results from the sensitivity analysis to subject matter knowledge about both relationships. The methods are illustrated by several examples.
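For matched pairs with a binary response, the familiar primal bound gives a feel for the computation behind such analyses: if the bias in treatment assignment is at most Γ, a discordant pair favours the treated unit with probability at most Γ/(1+Γ), so a bounding p-value is a binomial tail probability. The sketch below uses hypothetical counts; the paper's dual and simultaneous analyses build on, but differ from, this calculation.

```python
# Rosenbaum-type primal sensitivity bound for a matched-pairs sign test.
import numpy as np
from scipy.stats import binom

def sensitivity_pvalue(t_pos, discordant, gamma):
    """Upper bound on the one-sided p-value when t_pos of `discordant`
    discordant pairs favour treatment, at sensitivity parameter gamma."""
    p_upper = gamma / (1 + gamma)            # worst-case per-pair probability
    return binom.sf(t_pos - 1, discordant, p_upper)   # P(T >= t_pos)

# Hypothetical study: 50 discordant pairs, 36 favour the treated unit
for gamma in (1.0, 1.5, 2.0, 3.0):
    print(gamma, sensitivity_pvalue(36, 50, gamma))
```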

Journal ArticleDOI
TL;DR: This paper focuses on nonparametric estimation of the joint distribution and summaries of survival time and mark variables, based on a representation of the joint distribution function through the cumulative mark-specific hazard function which is analogous to the product integral representation of the univariate survival function.
Abstract: SUMMARY In many applications, variables of interest are marks of the endpoint which are not observed when the survival time is censored. This paper focuses on nonparametric estimation of the joint distribution and summaries of survival time and mark variables. We establish a representation of the joint distribution function through the cumulative mark-specific hazard function, which is analogous to the product integral representation of the univariate survival function. We identify a basic data structure common to various applications, propose nonparametric estimators and show that they maximise the likelihood. We formulate the problem in the marked point process framework and study both finite and large-sample properties of the estimators. We show that the joint distribution function estimator is nearly unbiased, uniformly strongly consistent and asymptotically normal. We also derive asymptotic variances for the estimators and propose sample-based variance estimates. Numerical studies demonstrate that both the estimators and their variance estimates perform well for practical sample sizes. We outline an application strategy.

Journal ArticleDOI
TL;DR: In this article, reference priors are derived for three cases where partial information is available: if a subjective conditional prior is given, two reasonable methods are proposed for finding the marginal reference prior and if, instead, a subjective marginal prior is available, a method for defining the conditional reference prior is proposed.
Abstract: SUMMARY In this paper, reference priors are derived for three cases where partial information is available. If a subjective conditional prior is given, two reasonable methods are proposed for finding the marginal reference prior. If, instead, a subjective marginal prior is available, a method for defining the conditional reference prior is proposed. A sufficient condition is then given under which this conditional reference prior agrees with the conditional reference prior derived in the first stage of the reference prior algorithm of Berger & Bernardo (1989, 1992). Finally, under the assumption of independence, a method for finding marginal reference priors is also proposed. Various examples are given to illustrate the methods.

Journal ArticleDOI
TL;DR: An extension to segmented bilinear models in which the expectation matrix is linked to a sum in which each segment has specified row and column covariance matrices as well as a coefficient parameter matrix that is specified only by its rank is proposed.
Abstract: SUMMARY This paper discusses the application of generalised linear methods to bilinear models by criss-cross regression. It proposes an extension to segmented bilinear models in which the expectation matrix is linked to a sum in which each segment has specified row and column covariance matrices as well as a coefficient parameter matrix that is specified only by its rank. This extension includes a variety of biadditive models including the generalised Tukey degree of freedom for non-additivity model that consists of two bilinear segments, one of which is constant. The extension also covers a variety of other models for which least squares fits had not hitherto been available, such as higher-way layouts combined into the rows and columns of a matrix, and a harmonic model which can be reparameterised so a lower rank fit is equivalent to a constant phase parameter. A number of practical applications are provided, including displaying fits by biplots and using them to diagnose models.

Journal ArticleDOI
TL;DR: In this paper, a nonparametric inference for duration times of two successive events is considered, where a new product-limit estimator for the second duration variable and a pathdependent joint survival function estimator are proposed, both modified for the dependent censoring.
Abstract: SUMMARY This paper considers nonparametric inference for duration times of two successive events. Since the second duration process becomes observable only if the first event has occurred, the length of the first duration affects the probability of the second duration being censored. Dependent censoring arises if the two duration times are correlated, which is often the case. Standard approaches to this problem fail because of the dependent censoring mechanism. A new product-limit estimator for the second duration variable and a path-dependent joint survival function estimator are proposed, both modified for the dependent censoring. Properties of the estimators are discussed. An example from Lawless (1982) and a simulation study are presented for illustration.