
Showing papers on "Random effects model published in 1996"


Book
01 Mar 1996
TL;DR: A textbook treatment of genetic evaluation with different sources of records; best linear unbiased prediction of breeding value under univariate models with one random effect; and non-additive animal models, including the dominance relationship matrix, a method for rapid inversion of the dominance matrix, and epistasis.
Abstract: Part 1 Genetic evaluation with different sources of records: the basic model; breeding value prediction from animal's own performance; breeding value prediction from progeny records; breeding value prediction from pedigree; breeding value prediction for one trait from another; selection index. Part 2 Genetic relationship between relatives: the numerator relationship matrix; decomposing the relationship matrix; computing the inverse of the relationship matrix; inverse of the relationship matrix for sires and maternal grandsires. Part 3 Best linear unbiased prediction of breeding value - univariate models with one random effect: brief theoretical background; a model for animal evaluation (animal model); a sire model; reduced animal model; animal model with groups. Part 4 Best linear unbiased prediction of breeding value - models with environmental effects: repeatability model; models with common environmental effects. Part 5 Best linear unbiased prediction of breeding value - multivariate models: equal design matrices and no missing records; canonical transformation; equal design matrices with missing records; Cholesky transformation; unequal design matrices; different traits measured on relatives. Part 6 Maternal trait models - animal and reduced animal models: animal model for a maternal trait; reduced animal model with maternal effects; multivariate maternal animal model. Part 7 Non-additive animal models: dominance relationship matrix; animal model with dominance effects; method for rapid inversion of the dominance matrix; epistasis. Part 8 Solving linear equations: direct inversion; iterating on the mixed model equations; iterating on the data.
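The numerator relationship matrix of Part 2 can be built with the tabular method. A minimal sketch (NumPy; the toy pedigree is illustrative, with 1-based animal IDs, 0 for an unknown parent, and parents listed before offspring):

```python
import numpy as np

def numerator_relationship_matrix(pedigree):
    """Tabular method: pedigree[i] = (sire, dam) with 1-based IDs,
    0 for an unknown parent, parents listed before offspring."""
    n = len(pedigree)
    A = np.zeros((n, n))
    for i, (s, d) in enumerate(pedigree):
        s, d = s - 1, d - 1  # to 0-based; -1 means unknown
        # diagonal: 1 plus half the relationship between the parents
        A[i, i] = 1.0 + (0.5 * A[s, d] if s >= 0 and d >= 0 else 0.0)
        # off-diagonals: average of relationships with each known parent
        for j in range(i):
            a = 0.0
            if s >= 0:
                a += 0.5 * A[j, s]
            if d >= 0:
                a += 0.5 * A[j, d]
            A[i, j] = A[j, i] = a
    return A

# toy pedigree: 1 and 2 are unrelated founders; 3 = 1 x 2; 4 = 1 x 3
A = numerator_relationship_matrix([(0, 0), (0, 0), (1, 2), (1, 3)])
```

Animal 4 is inbred (its parents are related), so its diagonal element exceeds 1.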

881 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigated the impact of the normality assumption for random effects on their estimates in the linear mixed-effects model and showed that if the distribution of random effects is a finite mixture of normal distributions, then the random effects may be badly estimated if normality is assumed.
Abstract: This article investigates the impact of the normality assumption for random effects on their estimates in the linear mixed-effects model. It shows that if the distribution of random effects is a finite mixture of normal distributions, then the random effects may be badly estimated if normality is assumed, and the current methods for inspecting the appropriateness of the model assumptions are not sound. Further, it is argued that a better way to detect the components of the mixture is to build this assumption in the model and then “compare” the fitted model with the Gaussian model. All of this is illustrated on two practical examples.
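The article's central point — that Gaussian-assumption estimates of random effects misbehave when the true distribution is a normal mixture — can be illustrated with a small simulation. A hedged sketch (the mixture locations, variances, and moment-based shrinkage estimator below are illustrative assumptions, not the paper's examples): the Gaussian BLUP shrinks every subject mean toward a single grand mean lying between the two mixture components.

```python
import numpy as np

rng = np.random.default_rng(0)
n_subj, m = 200, 5
sigma_e = 1.0

# true random effects from a two-component normal mixture (around -2 and +2),
# deliberately NOT a single Gaussian
comp = rng.integers(0, 2, n_subj)
b = rng.normal(np.where(comp == 0, -2.0, 2.0), 0.3)

y = b[:, None] + rng.normal(0.0, sigma_e, (n_subj, m))
ybar = y.mean(axis=1)

# normality-assuming BLUP: shrink every subject mean toward the grand mean,
# ignoring the bimodal structure entirely
sigma_b2_hat = max(ybar.var(ddof=1) - sigma_e ** 2 / m, 0.0)
shrink = sigma_b2_hat / (sigma_b2_hat + sigma_e ** 2 / m)
b_blup = ybar.mean() + shrink * (ybar - ybar.mean())
```

Every estimate near either mixture component is biased inward toward the single assumed center, which is the distortion the article formalizes.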

566 citations


Journal ArticleDOI
TL;DR: A Bayesian approach is presented using data from previous meta-analyses in the same therapeutic area to formulate a prior distribution for the heterogeneity between trials, and two approaches to estimating relative efficacy are considered.
Abstract: There exists a variety of situations in which a random effects meta-analysis might be undertaken using a small number of clinical trials. A problem associated with small meta-analyses is estimating the heterogeneity between trials. To overcome this problem, information from other related studies may be incorporated into the meta-analysis. A Bayesian approach to this problem is presented using data from previous meta-analyses in the same therapeutic area to formulate a prior distribution for the heterogeneity. The treatment difference parameters are given non-informative priors. Further, related trials which compare one or other of the treatments of interest with a common third treatment are included in the model to improve inference on both the heterogeneity and the treatment difference. Two approaches to estimating relative efficacy are considered, namely a general parametric approach and a method explicit to binary data. The methodology is illustrated using data from 26 clinical trials which investigate the prevention of cirrhosis using beta-blockers and sclerotherapy. Both sources of external information lead to more precise posterior distributions for all parameters, in particular that representing heterogeneity.
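The mechanics of placing an informative prior on the heterogeneity parameter can be seen in a small grid-based Bayesian random-effects meta-analysis. The trial data and the log-normal heterogeneity prior below are invented for illustration; the paper's actual prior is built from previous meta-analyses in the same therapeutic area.

```python
import numpy as np

# illustrative trial data (effect estimates and standard errors) -- hypothetical
y = np.array([-0.4, -0.1, -0.6, 0.1])
s = np.array([0.25, 0.30, 0.35, 0.28])

theta = np.linspace(-1.5, 1.0, 201)   # flat prior on the treatment difference
tau = np.linspace(0.01, 1.5, 150)     # heterogeneity (between-trial SD) grid

# informative log-normal prior on tau, standing in for external information
lp_tau = -np.log(tau) - (np.log(tau) + 1.0) ** 2 / (2 * 0.5 ** 2)

T, Th = np.meshgrid(tau, theta, indexing="ij")
var = s[None, None, :] ** 2 + T[..., None] ** 2
loglik = -0.5 * np.sum(np.log(var) + (y - Th[..., None]) ** 2 / var, axis=-1)
logpost = loglik + lp_tau[:, None]
post = np.exp(logpost - logpost.max())
post /= post.sum()

theta_post = post.sum(axis=0)          # marginal posterior of the effect
theta_mean = (theta * theta_post).sum()
```

With only four trials, the likelihood barely identifies tau, so the informative prior drives the precision of the posterior — the situation the paper addresses.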

532 citations


Journal ArticleDOI
TL;DR: In this paper, the authors investigate the tradeoff between the number of profiles per subject and number of subjects on the statistical accuracy of the estimators that describe the partworth heterogeneity.
Abstract: The drive to satisfy customers in narrowly defined market segments has led firms to offer wider arrays of products and services. Delivering products and services with the appropriate mix of features for these highly fragmented market segments requires understanding the value that customers place on these features. Conjoint analysis endeavors to unravel the value, or partworths, that customers place on the product or service's attributes from experimental subjects' evaluation of profiles based on hypothetical products or services. When the goal is to estimate the heterogeneity in the customers' partworths, traditional estimation methods, such as least squares, require each subject to respond to more profiles than product attributes, resulting in lengthy questionnaires for complex, multiattributed products or services. Long questionnaires pose practical and theoretical problems. Response rates tend to decrease with increasing questionnaire length, and more importantly, academic evidence indicates that long questionnaires may induce response biases. The problems associated with long questionnaires call for experimental designs and estimation methods that recover the heterogeneity in the partworths with shorter questionnaires. Unlike more popular estimation methods, Hierarchical Bayes (HB) random effects models do not require that individual-level design matrices be of full rank, which leads to the possibility of using fewer profiles per subject than currently used. Can this theoretical possibility be practically implemented? This paper tests this conjecture with empirical studies and mathematical analysis. The random effects model in the paper describes the heterogeneity in subject-level partworths or regression coefficients with a linear model that can include subject-level covariates. In addition, the error variances are specific to the subjects, thus allowing for the differential use of the measurement scale by different subjects.
In the empirical study, subjects' responses to a full profile design are randomly deleted to test the performance of HB methods with declining sample sizes. These simple experiments indicate that HB methods can recover heterogeneity and estimate individual-level partworths, even when individual-level least squares estimators do not exist due to insufficient degrees of freedom. Motivated by these empirical studies, the paper analytically investigates the trade-off between the number of profiles per subject and the number of subjects on the statistical accuracy of the estimators that describe the partworth heterogeneity. The paper considers two experimental designs: each subject receives the same set of profiles, and subjects receive different blocks of a fractional factorial design. In the first case, the optimal design, subject to a budget constraint, uses more subjects and fewer profiles per subject when the ratio of unexplained, partworth heterogeneity to unexplained response variance is large. In the second case, one can maintain a given level of estimation accuracy as the number of profiles per subject decreases by increasing the number of subjects assigned to each block. These results provide marketing researchers the option of using shorter questionnaires for complex products or services. The analysis assumes that response quality is independent of questionnaire length and does not address the impact of design factors on response quality. If response quality and questionnaire length were, in fact, unrelated, then marketing researchers would still find the paper's results useful in improving the efficiency of their conjoint designs. However, if response quality were to decline with questionnaire length, as the preponderance of academic research indicates, then the option to use shorter questionnaires would become even more valuable.

512 citations


Journal ArticleDOI
TL;DR: It is concluded that likelihood based methods are preferred to the standard method in undertaking random effects meta-analysis when the value of σ_B² has an important effect on the overall estimated treatment effect.
Abstract: In a meta-analysis of a set of clinical trials, a crucial but problematic component is providing an estimate and confidence interval for the overall treatment effect θ. Since in the presence of heterogeneity a fixed effect approach yields an artificially narrow confidence interval for θ, the random effects method of DerSimonian and Laird, which incorporates a moment estimator of the between-trial component of variance σ_B², has been advocated. With the additional distributional assumptions of normality, a confidence interval for θ may be obtained. However, this method does not provide a confidence interval for σ_B², nor a confidence interval for θ which takes account of the fact that σ_B² has to be estimated from the data. We show how a likelihood based method can be used to overcome these problems, and use profile likelihoods to construct likelihood based confidence intervals. This approach yields an appropriately widened confidence interval compared with the standard random effects method. Examples of application to a published meta-analysis and a multicentre clinical trial are discussed. It is concluded that likelihood based methods are preferred to the standard method in undertaking random effects meta-analysis when the value of σ_B² has an important effect on the overall estimated treatment effect.
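The DerSimonian and Laird method that the paper takes as its baseline can be sketched directly (the effect estimates and within-trial variances below are hypothetical):

```python
import numpy as np

def dersimonian_laird(y, v):
    """DerSimonian-Laird random-effects pooled estimate.
    y: trial effect estimates; v: their within-trial variances."""
    w = 1.0 / v                               # fixed-effect weights
    theta_fe = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - theta_fe) ** 2)       # Cochran's Q statistic
    k = len(y)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max((Q - (k - 1)) / c, 0.0)        # moment estimator, truncated at 0
    w_re = 1.0 / (v + tau2)                   # random-effects weights
    theta_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return theta_re, se_re, tau2

# hypothetical trial data
theta, se, tau2 = dersimonian_laird(np.array([0.1, 0.5, -0.2, 0.6]),
                                    np.array([0.04, 0.09, 0.05, 0.08]))
```

Note the limitation the paper addresses: the standard error above treats the estimated σ_B² as known, so the resulting confidence interval is too narrow; the profile-likelihood interval widens it appropriately.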

471 citations


Journal ArticleDOI
TL;DR: In this article, the authors derived general formulas for the asymptotic bias in regression coefficients and variance components estimated by penalized quasi-likelihood (PQL) in generalized linear mixed models with canonical link function and multiple sets of independent random effects.
Abstract: General formulas are derived for the asymptotic bias in regression coefficients and variance components estimated by penalized quasi-likelihood (PQL) in generalized linear mixed models with canonical link function and multiple sets of independent random effects. Easily computed correction matrices result in variance component estimates that have satisfactory asymptotic behavior for small values of the variance components and significantly reduce bias for larger values. Both first-order and second-order correction procedures are developed for regression coefficients estimated by PQL. The methods are illustrated through an analysis of an experiment on salamander matings involving crossed male and female random effects, and their properties are evaluated in a simulation study.

457 citations


Journal ArticleDOI
TL;DR: MIXOR provides maximum marginal likelihood estimates for mixed-effects ordinal probit, logistic, and complementary log-log regression models, used for analysis of dichotomous and ordinal outcomes from either a clustered or longitudinal design.

403 citations


Journal ArticleDOI
TL;DR: This paper develops a general latent class model with random effects to model the conditional dependence among multiple diagnostic tests (or readers), and develops a graphical method for checking whether or not the conditional dependence is of concern and for identifying the pattern of the correlation.
Abstract: When the results of a reference (or gold standard) test are missing or not error-free, the accuracy of diagnostic tests is often assessed through latent class models with two latent classes, representing diseased or nondiseased status. Such models, however, require that conditional on the true disease status, the tests are statistically independent, an assumption often violated in practice. Consequently, the model generally fits the data poorly. In this paper, we develop a general latent class model with random effects to model the conditional dependence among multiple diagnostic tests (or readers). We also develop a graphical method for checking whether or not the conditional dependence is of concern and for identifying the pattern of the correlation. Using the random-effects model and the graphical method, a simple adequate model that is easy to interpret can be obtained. The methods are illustrated with three examples from the biometric literature. The proposed methodology is also applicable when the true disease status is indeed known and conditional dependence could well be present.

402 citations


Journal ArticleDOI
TL;DR: This paper models a continuous covariate over time while simultaneously relating the covariate to disease risk; the Markov chain Monte Carlo technique of Gibbs sampling is used to estimate the joint posterior distribution of the model's unknown parameters.
Abstract: Recent methodologic developments in the analysis of longitudinal data have typically addressed one of two aspects: (i) the modelling of repeated measurements of a covariate as a function of time or other covariates, or (ii) the modelling of the effect of a covariate on disease risk. In this paper, we address both of these issues in a single analysis by modelling a continuous covariate over time and simultaneously relating the covariate to disease risk. We use the Markov chain Monte Carlo technique of Gibbs sampling to estimate the joint posterior distribution of the unknown parameters of the model. Simulation studies showed that jointly modelling survival and covariate data reduced bias in parameter estimates due to covariate measurement error and informative censoring. We illustrate the methodology by application to a data set that consists of repeated measurements of the immunologic marker CD4 and times of diagnosis of AIDS for a cohort of anti-HIV-1 positive recipients of anti-HIV-1 positive blood transfusions. We assume a linear random effects model with subject-specific intercepts and slopes and normal errors for the true log and square root CD4 counts, and a proportional hazards model for AIDS-free survival time expressed as a function of current true CD4 value. On the square root scale, the joint approach yielded a mean slope for CD4 that was 7 per cent steeper and a log relative risk of AIDS that was 35 per cent larger than those obtained by analysis of the component sub-models separately.

401 citations


Journal ArticleDOI
TL;DR: MIXREG is a program that provides estimates for a mixed-effects regression model (MRM) for normally-distributed response data including autocorrelated errors, utilizing both the EM algorithm and a Fisher-scoring solution.

249 citations


Book
01 Jan 1996
TL;DR: A textbook treatment of methods for repeated measurements under both normal and non-normal error distributions, including random effects models, generalized linear models, and maximum quasi-likelihood estimation.
Abstract: Introduction. Normal Error Distributions: Multivariate Analysis of Variance. Univariate Analysis of Variance. Regression Methods. Random Effects Models. Covariance Structures. Non-normal Error Distributions: Continuous Non-Normal Measures. Gaussian Estimation. Nonlinear Models. Generalized Linear Models and Maximum Quasi-Likelihood Estimation. Binary and Categorical Measures. Comparisons of Methods. Data Appendices. References.

Journal ArticleDOI
TL;DR: A set of FORTRAN programs implementing a multiple-trait Gibbs sampling algorithm for (co)variance component inference in animal models (MTGSAM) is presented.
Abstract: A set of FORTRAN programs to implement a multiple-trait Gibbs sampling algorithm for (co)variance component inference in animal models (MTGSAM) was developed. The MTGSAM programs are available to the public. The programs support models with correlated genetic effects and arbitrary numbers of covariates, fixed effects, and independent random effects for each trait. Any combination of missing traits is allowed. The programs were used to estimate variance components for 50 replicates of simulated data. Each replicate consisted of 50 animals of each sex in each of four generations, for 400 animals in each replicate for two traits. For MTGSAM, informative prior distributions for variance components were inverted Wishart random variables with 10 df and means equal to the simulation parameters. A total of 15,000 Gibbs sampling rounds were completed for each replicate, with 2,000 rounds discarded for burn-in. For multiple-trait derivative free restricted maximum likelihood (MTDFREML), starting values for the variance components were the simulation parameters. Averages of posterior means of variance components estimated using MTGSAM with informative and flat prior distributions for variance components and REML estimates obtained using MTDFREML indicated that all three methods were empirically unbiased. Correlations between estimates from MTGSAM using flat priors and MTDFREML all exceeded .99.
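A single-trait, one-way version of such a Gibbs sampler conveys the idea. This is a simplified sketch with flat priors on the variance components, not MTGSAM's multiple-trait algorithm with inverted Wishart priors; all parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
q, m = 40, 6                                   # sires, progeny per sire
a_true = rng.normal(0, np.sqrt(0.5), q)        # true sire variance 0.5
y = 10.0 + a_true[:, None] + rng.normal(0, 1.0, (q, m))  # residual variance 1

mu, sa2, se2 = y.mean(), 1.0, 1.0
keep = []
for it in range(4000):
    # sample random effects given the variances (conjugate normal update)
    prec = m / se2 + 1.0 / sa2
    a = rng.normal((y - mu).sum(axis=1) / se2 / prec, np.sqrt(1.0 / prec))
    # sample the overall mean (flat prior)
    mu = rng.normal((y - a[:, None]).mean(), np.sqrt(se2 / y.size))
    # sample variance components from inverse-gamma full conditionals
    sa2 = (a @ a) / rng.chisquare(q - 2)
    resid = y - mu - a[:, None]
    se2 = (resid ** 2).sum() / rng.chisquare(y.size - 2)
    if it >= 1000:                             # burn-in discarded
        keep.append((sa2, se2))

sa2_mean, se2_mean = np.array(keep).mean(axis=0)
```

Posterior means of the variance components should land near the simulation values, mirroring the empirical-unbiasedness check the abstract reports.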

Journal ArticleDOI
TL;DR: When there is publication bias, studies yielding large p values, and hence small effect estimates, are less likely to be published, which leads to biased estimates of effects in meta-analysis.
Abstract: When there is publication bias, studies yielding large p values, and hence small effect estimates, are less likely to be published, which leads to biased estimates of effects in meta-analysis. We i...

Journal ArticleDOI
TL;DR: In this paper, the authors argue against use of the fixed effects model because it may lead to misleading conclusions about situational specificity, and propose a random effects model (RE) which provides estimates of how between-study differences influence the relationships under study.
Abstract: Combining statistical information across studies (i.e., meta-analysis) is a standard research tool in applied psychology. The most common meta-analytic approach in applied psychology, the fixed effects approach, assumes that individual studies are homogeneous and are sampled from the same population. This model assumes that sampling error alone explains the majority of observed differences in study effect sizes, and its use has led some to challenge the notion of situational specificity in favor of validity generalization. We critique the fixed effects methodology and propose an advancement: the random effects model (RE), which provides estimates of how between-study differences influence the relationships under study. RE models assume that studies are heterogeneous since they are often conducted by different investigators under different settings. Parameter estimates of both models are compared and evidence in favor of the random effects approach is presented. We argue against use of the fixed effects model because it may lead to misleading conclusions about situational specificity.

Journal ArticleDOI
TL;DR: In this paper, the authors developed an alternative approach based on a flexible family of models for which both the fixed and the random effects are linear combinations of B-splines, which allows estimates of each individual's smooth trajectory over time to be exhibited.
Abstract: In this paper we analyse CD4 counts from infants born to mothers who are infected with the human immunodeficiency virus. A random effects model with linear or low order polynomials in time is unsatisfactory for these longitudinal data. We develop an alternative approach based on a flexible family of models for which both the fixed and the random effects are linear combinations of B-splines. The fixed and random parts are smooth functions of time and the covariance structure is parsimonious. The procedure allows estimates of each individual's smooth trajectory over time to be exhibited. Model selection, estimation and computation are discussed. Centile curves are presented that take into account the longitudinal nature of the data. We emphasize a graphical approach to the presentation of results.

Journal ArticleDOI
TL;DR: In this paper, three alternative estimation procedures based on the EM algorithm are considered: two of them make use of numerical integration techniques (Gauss-Hermite or Monte Carlo), and the third is an EM-type algorithm based on posterior modes.

Journal ArticleDOI
TL;DR: In this paper, an EM algorithm for exact maximum likelihood estimation of the population parameters for nonlinear random effects models was introduced, which can account for both within-and between-individual sources of variability and serial correlation within individual observations when analyzing unbalanced repeated measures data.
Abstract: The pharmaceutical industry is currently interested in the population approach and population models, also known as mixed effects models and random effects models depending on the precise form. Population models are useful in that they can account for both within- and between-individual sources of variability and serial correlation within individual observations when analyzing unbalanced repeated measures data. The modelling of population pharmacodynamic or pharmacokinetic profiles typically involves nonlinear random effects models. Each individual's observations are modelled by identical (up to unknown parameter values) nonlinear regression models, with the distribution of the observations, or a transformation of the observations, about expected responses taken to be normal, with the degree of variability described by a variance model. Between-individual variability is modelled by a population distribution for the individual regression parameter values (random effects). In a parametric analysis the population distribution is taken to be normal, the parameters of which, along with the parameters of the variance model, are known as the population parameters. Maximum likelihood estimation of the population parameters for nonlinear random effects models was pioneered by Beal and Sheiner (1979), and since then a number of algorithms have appeared for approximate maximum likelihood, including Steimer et al. (1984), Lindstrom and Bates (1990), Beal and Sheiner (1992), and Mentre and Gomeni (1995). All of these algorithms are approximate in some way. For a summary see Beal and Sheiner (1992), Wolfinger (1993), Pinheiro and Bates (1994), and Davidian and Giltinan (1995). In this paper an EM algorithm for exact maximum likelihood estimation is introduced. An EM algorithm obtaining maximum likelihood estimates for linear random effects models was introduced by Dempster, Laird, and Rubin (1977).
Laird and Ware (1982), Lindstrom and Bates (1988), Jennrich and Schluchter (1986), and Liu and Rubin (1994) all describe hybrid EM algorithms for the linear random effects model. A true EM algorithm for the linear model is described by Jamshidian and Jennrich (1993). Mentre and Gomeni (1995) describe an approximate EM algorithm for nonlinear random effects models and, from the algorithm given in this paper, it can be seen clearly how their approximations arise. The present algorithm uses Monte Carlo methods to perform the E step, a strategy previously adopted in an altogether different model by Guo and Thompson (1994). Guo and Thompson require a Gibbs sampler, that is, a Markov chain Monte Carlo method for their E step, but the present algorithm uses independent samples. In Section 2 of this paper the nonlinear random effects model is described. Section 3 gives the EM algorithm without random effect covariates, while Section 4 gives the modified algorithm in the
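The Monte Carlo E-step with independent samples can be illustrated on a simple linear random-intercept model, where the conditional distribution of the random effects is available in closed form; the nonlinear case replaces these exact draws with importance or rejection samples. Everything below is an illustrative sketch, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(2)
q, m = 100, 4
b_true = rng.normal(0, 1.0, q)                            # true sa2 = 1
y = 5.0 + b_true[:, None] + rng.normal(0, 0.5, (q, m))    # true se2 = 0.25

mu, sa2, se2 = 0.0, 1.0, 1.0
for _ in range(200):
    # E-step by Monte Carlo: independent draws from b_i | y at current params
    prec = m / se2 + 1.0 / sa2
    cond_mean = (y - mu).sum(axis=1) / se2 / prec
    bs = rng.normal(cond_mean, np.sqrt(1.0 / prec), (500, q))  # 500 draws each
    # M-step: maximize the MC estimate of the expected complete-data loglik
    mu = (y[None] - bs[:, :, None]).mean()
    sa2 = (bs ** 2).mean()
    se2 = ((y[None] - mu - bs[:, :, None]) ** 2).mean()
```

With a fixed Monte Carlo sample size the iterates fluctuate around the maximum likelihood estimates rather than converging exactly, which is why practical MCEM schemes grow the sample size as iterations proceed.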

Journal ArticleDOI
TL;DR: The proposed model is applied to birth defects data, where continuous data on the size of infants who were exposed to anticonvulsant medications in utero are compared to controls.
Abstract: We discuss latent variable models that allow for fixed effect covariates, as well as covariates affecting the latent variable directly. Restricted maximum likelihood and maximum likelihood are used to estimate model parameters. A generalized likelihood ratio test can be used to test significance of the covariates affecting the latent outcomes. Special cases of the proposed model correspond to factor analysis, mixed models, random effects models, and simultaneous equations. The model is applied to birth defects data, where continuous data on the size of infants who were exposed to anticonvulsant medications in utero are compared to controls.

Journal ArticleDOI
TL;DR: The possibility of modelling the sampling design using fixed and random effects to redefine target parameters, improve estimators of standard target parameters and improve standard variance estimators is investigated.
Abstract: Health surveys typically have stratified multistage clustered designs in which individuals are sampled with differing probabilities. The sampling design is taken into account in a classical survey analysis by using sample-weighted estimators and variance estimators calculated at the primary-sampling-unit level. In this paper we investigate the possibility of modelling the sampling design using fixed and random effects to redefine target parameters, improve estimators of standard target parameters and improve standard variance estimators. References in which this type of additional modelling was used in health surveys are given. The problem of estimating population variance components is discussed in some detail, with an application involving estimation of between- and within-family variance components in the Hispanic Health and Nutrition Examination Survey.

Journal ArticleDOI
TL;DR: In this article, a Bayesian hierarchical model for LHS inhaler compliance was proposed, incorporating individual-level random effects to account for correlations among repeated measures on the same participant, which enables assessment of the relationships among visit attendance, canister return, self-reported compliance level, and canister weight compliance.
Abstract: In the Lung Health Study (LHS), compliance with the use of inhaled medication was assessed at each follow-up visit both by self-report and by weighing the used medication canisters. One or both of these assessments were missing if the participant failed to attend the visit or to return all canisters. Approximately 30% of canister-weight data and 5% to 15% of self-report data were missing at different visits. We use Gibbs sampling with data augmentation and a multivariate Hastings update step to implement a Bayesian hierarchical model for LHS inhaler compliance. Incorporating individual-level random effects to account for correlations among repeated measures on the same participant, our model is a longitudinal extension of the Tobit models used in econometrics to deal with partially unobservable data. It enables (a) assessment of the relationships among visit attendance, canister return, self-reported compliance level, and canister weight compliance, and (b) determination of demographic, physiolog...

Journal ArticleDOI
TL;DR: These simulated meta-analyses demonstrate the main point, which is that the time of first significance, however parameterized, is itself a random variable with error variance.

Journal ArticleDOI
TL;DR: The power of the Wald test depends on the parameterization used; a whole family of Wald statistics with p values ranging from 0 to 1 can be generated with power transformations of the random effect parameter.
Abstract: Computer programs often produce a parameter estimate θ̂ and its estimated variance v̂(θ̂). Thus it is easy to compute a Wald statistic (θ̂ − θ0){v̂(θ̂)}^(−1/2) to test the null hypothesis θ = θ0. Hauck and Donner and Vaeth have identified situations in which the Wald statistic has poor power. We consider another example that is not in the classes discussed by those authors. We present data from a balanced one-way random effects analysis of variance (ANOVA) that illustrate the poor power of the Wald statistic compared to the usual F test. In this example the parameter of interest is the variance of the random effect. The power of the Wald test depends on the parameterization used, however, and a whole family of Wald statistics with p values ranging from 0 to 1 can be generated with power transformations of the random effect parameter.
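The family of Wald statistics is easy to reproduce. For the power transformation g(x) = x^p, the delta method gives se{g(σ̂²_B)} = p σ̂_B^(2(p−1)) se(σ̂²_B), so the Wald statistic for H0: σ²_B = 0 collapses to σ̂²_B/(p·se): it depends entirely on the chosen power p. The estimate and standard error below are illustrative numbers, not the paper's data.

```python
from math import erf, sqrt

def norm_sf(z):
    """One-sided upper-tail probability of the standard normal."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

sa2_hat, se_hat = 0.6, 0.4   # illustrative variance-component estimate and SE

# Wald test of H0: sigma_B^2 = 0 under g(x) = x**p; by the delta method the
# statistic is W = sa2_hat / (p * se_hat), so the p value tracks p alone
pvals = [norm_sf(sa2_hat / (p * se_hat)) for p in (0.25, 0.5, 1.0, 2.0, 4.0)]
```

The same data yield a p value anywhere from essentially 0 to well above conventional significance levels purely by reparameterizing, which is the abstract's point.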

Journal ArticleDOI
TL;DR: In this paper, a negative multinomial regression model for clustered count data is proposed, which makes explicit allowance for correlated observations by subjecting the multiple counts in the same cluster to a cluster-specific random effect.
Abstract: In this paper, I consider a negative multinomial regression model for clustered count data. One example of such data is quarterly counts of a surgical procedure performed from 1988 through 1991 in a national sample of 175 U.S. hospitals. In this data set, each hospital contributes a number of quarterly counts. The quarterly counts contributed by the same hospital form a cluster. The potential problem for clustered count data is that the multiple counts in the same cluster may not be independent. When they are not independent, Poisson and mixed Poisson models for overdispersed count data including negative binomial models are inappropriate. In contrast, the negative multinomial regression model makes explicit allowance for correlated observations by subjecting the multiple counts in the same cluster to a cluster-specific random effect. A gamma-distributed cluster-specific effect in this formulation leads to the negative multinomial regression model. I describe maximum likelihood estimation and illustrate the model through examples.
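The gamma-mixing construction can be sketched by simulation: a cluster-specific gamma random effect multiplying a common Poisson rate yields overdispersed marginal counts and positive within-cluster correlation. The rate and gamma shape below are illustrative, not the hospital data.

```python
import numpy as np

rng = np.random.default_rng(3)
clusters, quarters, base_rate = 2000, 4, 5.0

# cluster-specific gamma random effect with mean 1; shared by all counts
# in a cluster, it induces the negative multinomial distribution
shape = 2.0
g = rng.gamma(shape, 1.0 / shape, clusters)
counts = rng.poisson(base_rate * g[:, None], (clusters, quarters))

# counts in the same cluster share g, so they are positively correlated,
# and each marginal count is overdispersed (negative binomial)
marg_var = counts.var()
within_corr = np.corrcoef(counts[:, 0], counts[:, 1])[0, 1]
```

A plain Poisson model would force the marginal variance to equal the mean (here 5.0) and the within-cluster correlation to be zero; both assumptions fail visibly in the simulated data.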

Journal ArticleDOI
TL;DR: In this article, a family of distributional models for failure-time data that accounts for multiple levels of clustering and reduces in the case of a single clustering level to a univariate frailty model is proposed.
Abstract: In recent years substantial research has been devoted to developing failure-time methodology which accounts for possible dependency between observations. An example is the univariate frailty model (Vaupel, Manton & Stallard, 1979), which incorporates an exchangeable dependence structure by the inclusion of cluster-specific random effects. In some studies it may be reasonable to expect more than one level of within-cluster association: for instance, association between a parent and child versus that between two siblings in studies of familial disease aggregation, or association between two village residents who live in different households versus that between residents of the same household in intervention studies. We propose a family of distributional models for failure-time data that accounts for multiple levels of clustering and reduces in the case of a single clustering level to a univariate frailty model. The resulting survival functions are constructed by a recursive nesting of univariate frailty-type distributions through which Archimedean copula forms are determined for all bivariate margins. Properties of the proposed model are developed, illustrated and briefly contrasted with multivariate frailty model properties. In conclusion, we outline the application of our model to marginal risk regression problems.
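At a single clustering level the model reduces to a shared gamma frailty, whose bivariate margins follow a Clayton (Archimedean) copula with Kendall's τ = θ/(θ + 2), where θ is the frailty variance. A simulation sketch with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(4)
n, theta = 1500, 2.0                  # clusters of size 2; frailty variance theta

# a shared gamma frailty per cluster induces Clayton-copula dependence
# between the two members' failure times
z = rng.gamma(1.0 / theta, theta, n)  # mean 1, variance theta
t1 = rng.exponential(1.0 / z)         # conditional hazard proportional to z
t2 = rng.exponential(1.0 / z)

# empirical Kendall's tau over all pairs of clusters; for the Clayton
# copula the population value is theta / (theta + 2) = 0.5 here
d1 = np.sign(t1[:, None] - t1[None, :])
d2 = np.sign(t2[:, None] - t2[None, :])
iu = np.triu_indices(n, 1)
tau_hat = (d1[iu] * d2[iu]).mean()
```

The empirical τ lands near the theoretical 0.5, confirming the copula form; the paper's nested construction stacks such frailties to give different θ values at different clustering levels.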

Book ChapterDOI
01 Jan 1996
TL;DR: In the analysis of panel data, it is customary to pool the observations with or without separate intercept terms assuming that the slope coefficients do not vary across the cross sections or over time.
Abstract: In the analysis of panel data, it is customary to pool the observations with or without separate intercept terms, assuming that the slope coefficients do not vary across the cross sections or over time. The separate intercepts can be assumed to be fixed (fixed effects models, see Chapter 3) or random (random effects or variance components models, see Chapter 4). These models account for heterogeneity in the intercept term only.

Journal ArticleDOI
TL;DR: Compared with the RE model, the Bayesian methods are demonstrated to be relatively robust against a wide choice of specifications of such information on heterogeneity, and allow for more detailed and satisfactory statements to be made, not only about the overall risk but about the individual studies, on the basis of the combined information.

Journal ArticleDOI
TL;DR: Empirical Bayes estimators based on the two models are judged according to their ability to provide parameter estimates in a Cox model predicting clinical outcomes.
Abstract: In this paper we consider the choice of model used in estimation of trajectories of CD4 T-cell counts by empirical Bayes estimators. Tsiatis et al. have demonstrated that empirical Bayes estimates of CD4 values correct for the bias resulting from measurement error when using CD4 as a covariate in a Cox model to predict clinical events. Here, empirical Bayes estimates from a random effects model are compared to estimates from the more general stochastic regression model presented in Taylor et al. Empirical Bayes estimators based on the two models are judged according to their ability to provide parameter estimates in a Cox model predicting clinical outcomes. Data from ACTG 118 are used as an illustration.
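To illustrate the empirical Bayes idea underlying both candidate models, here is a minimal sketch for the simplest case, a random-intercept model with known variance components (all names and numbers are illustrative, not from the paper; in practice the variance components would be estimated, e.g. by REML, and the CD4 models above are richer):

```python
import numpy as np

# Empirical Bayes shrinkage in y_ij = mu + b_i + e_ij, with
# b_i ~ N(0, tau2) between subjects and e_ij ~ N(0, sigma2) within.
# The EB estimate of a subject's level shrinks its raw mean toward
# the overall mean mu, with more shrinkage for subjects with less data.
def eb_intercepts(y_by_subject, mu, tau2, sigma2):
    """Return empirical Bayes estimates of mu + b_i for each subject."""
    estimates = []
    for y in y_by_subject:
        y = np.asarray(y, dtype=float)
        n = len(y)
        w = tau2 / (tau2 + sigma2 / n)          # shrinkage weight in [0, 1)
        estimates.append(w * y.mean() + (1 - w) * mu)
    return np.array(estimates)

subjects = [[4.0, 5.0, 6.0], [10.0, 12.0]]
print(eb_intercepts(subjects, mu=7.0, tau2=4.0, sigma2=2.0))
```

Each estimate lies between the subject's own mean and the population mean; using such smoothed values instead of noisy raw measurements as covariates is what mitigates the measurement-error bias in the Cox regression discussed above.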

Journal ArticleDOI
TL;DR: In this article, the authors reformulate the Hausman test and find that it incorporates and tests only a limited set of moment restrictions, which has power toward additional sources of model misspecification.

Journal ArticleDOI
TL;DR: In this paper, the mixed effects model for binary responses was extended to accommodate ordinal responses in general and discrete time survival data with ordinal response in particular, and a Newton-Raphson estimation procedure was proposed without resorting to numerical, approximation-based or Monte Carlo integration techniques.
Abstract: The mixed effects model for binary responses due to Conaway (1990, A Random Effects Model for Binary Data) is extended to accommodate ordinal responses in general and discrete time survival data with ordinal responses in particular. Given a multinomial likelihood, cumulative complementary log-log link function, and log-gamma random effects distribution, the resulting marginal likelihood has a closed form. As a result, a Newton-Raphson estimation procedure is feasible without resorting to numerical, approximation-based, or Monte Carlo integration techniques. The parameters in the model have a proportional hazards interpretation in terms of multivariate discrete time data with ordinal responses. Using data from a psychological example, the proposed method is compared with other mixed effects approaches as well as population-averaged models.
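The closed-form marginal likelihood claimed in the abstract comes from the gamma Laplace transform; a sketch in generic notation (the paper's parameterization may differ):

```latex
% Cumulative complementary log-log link with a multiplicative random effect
% \nu = e^{u} \sim \mathrm{Gamma}(\alpha, \alpha), so u is log-gamma:
%   P(Y > k \mid \nu) = \exp\{-e^{\eta_k}\,\nu\}
% Marginally, by the gamma Laplace transform,
P(Y > k) = E\!\left[\exp\{-e^{\eta_k}\nu\}\right]
         = \left(1 + e^{\eta_k}/\alpha\right)^{-\alpha}.
```

Since every marginal cumulative probability has this closed form, the likelihood requires no numerical, approximation-based, or Monte Carlo integration, which is what makes direct Newton-Raphson estimation feasible.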

Journal ArticleDOI
TL;DR: A piecewise linear random effects model is proposed for analyzing longitudinal data where the multivariate outcome can depend upon time spent on treatment, and allows either a pragmatic or explanatory analysis.
Abstract: In a randomized longitudinal clinical trial designed to evaluate two or more rival treatments, an intent-to-treat analysis requires inclusion of all randomized patients, regardless of whether they remain on protocol for the duration of the study. We propose a piecewise linear random effects model for analyzing longitudinal data where the multivariate outcome can depend upon time spent on treatment. The model assumes that data are available on a random sample of subjects after treatment is terminated, and allows either a pragmatic or explanatory analysis (as defined by Schwartz and Lellouch, 1967, Journal of Chronic Diseases 20, 637-648). Full maximum likelihood estimation of the model parameters is carried out using widely available statistical software for repeated measures with missing data and for nonparametric survival curve estimation. Data from a national, multicenter pediatric AIDS clinical trial are analyzed to illustrate implementation and interpretation of the model.
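A piecewise linear mean structure of the kind described can be sketched with a subject-specific changepoint at the treatment-termination time (a hedged illustration; the names `t_off`, `beta`, and the numbers are hypothetical, and the paper's random-effects structure is not shown):

```python
import numpy as np

# One slope while on treatment, another after treatment is terminated.
# The design splits elapsed time at the changepoint t_off, so the fitted
# trajectory is continuous but can bend at treatment termination.
def piecewise_design(times, t_off):
    """Design columns: intercept, time on treatment, time off treatment."""
    t = np.asarray(times, dtype=float)
    on = np.minimum(t, t_off)           # time accrued while on treatment
    off = np.maximum(t - t_off, 0.0)    # time accrued after termination
    return np.column_stack([np.ones_like(t), on, off])

X = piecewise_design([0, 1, 2, 3, 4], t_off=2.0)
beta = np.array([50.0, -2.0, -5.0])     # intercept, on-slope, off-slope
print(X @ beta)
```

Giving each subject random coefficients on these columns (and the subject's own termination time as the changepoint) is the essence of a piecewise linear random effects model: the pragmatic analysis uses all follow-up, while an explanatory analysis can restrict attention to the on-treatment segment.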