
Showing papers on "Mixed model published in 1999"


Journal ArticleDOI
01 Jun 1999-Ecology
TL;DR: A number of considerations related to choosing methods for the meta-analysis of ecological data, including the choice of parametric vs. resampling methods, reasons for conducting weighted analyses where possible, and comparisons of fixed vs. mixed models in categorical and regression-type analyses are outlined.
Abstract: Meta-analysis is the use of statistical methods to summarize research findings across studies. Special statistical methods are usually needed for meta-analysis, both because effect-size indexes are typically highly heteroscedastic and because it is desirable to be able to distinguish between-study variance from within-study sampling-error variance. We outline a number of considerations related to choosing methods for the meta-analysis of ecological data, including the choice of parametric vs. resampling methods, reasons for conducting weighted analyses where possible, and comparisons of fixed vs. mixed models in categorical and regression-type analyses.
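The fixed- vs. mixed-model contrast the authors outline turns on whether a between-study variance component is added to the within-study sampling variances before weighting. A minimal sketch of the weighted random-effects calculation, using the standard DerSimonian-Laird moment estimator (our choice for illustration; the paper does not prescribe this particular estimator):

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects meta-analytic mean via the DerSimonian-Laird
    moment estimator of the between-study variance tau^2."""
    # Fixed-effect (inverse-variance) weights and pooled estimate.
    w = [1.0 / v for v in variances]
    sw = sum(w)
    mu_fixed = sum(wi * e for wi, e in zip(w, effects)) / sw
    # Cochran's Q statistic measures excess between-study heterogeneity.
    q = sum(wi * (e - mu_fixed) ** 2 for wi, e in zip(w, effects))
    k = len(effects)
    c = sw - sum(wi ** 2 for wi in w) / sw
    tau2 = max(0.0, (q - (k - 1)) / c)  # moment estimate, truncated at zero
    # Random-effects weights add tau^2 to each within-study variance.
    w_re = [1.0 / (v + tau2) for v in variances]
    mu_re = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    se_re = math.sqrt(1.0 / sum(w_re))
    return mu_re, se_re, tau2
```

When the moment estimate of the between-study variance is zero, the result coincides with the fixed-effect inverse-variance pooled mean.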

954 citations


Journal ArticleDOI
TL;DR: In this paper, the cubic smoothing spline is used in conjunction with fixed and random effects, random coefficients and variance modelling to provide simultaneous modelling of trends and covariance structure, which allows coherent and flexible empirical model building in complex situations.
Abstract: In designed experiments and in particular longitudinal studies, the aim may be to assess the effect of a quantitative variable such as time on treatment effects. Modelling treatment effects can be complex in the presence of other sources of variation. Three examples are presented to illustrate an approach to analysis in such cases. The first example is a longitudinal experiment on the growth of cows under a factorial treatment structure where serial correlation and variance heterogeneity complicate the analysis. The second example involves the calibration of optical density and the concentration of a protein DNase in the presence of sampling variation and variance heterogeneity. The final example is a multienvironment agricultural field experiment in which a yield-seeding rate relationship is required for several varieties of lupins. Spatial variation within environments, heterogeneity between environments and variation between varieties all need to be incorporated in the analysis. In this paper, the cubic smoothing spline is used in conjunction with fixed and random effects, random coefficients and variance modelling to provide simultaneous modelling of trends and covariance structure. The key result that allows coherent and flexible empirical model building in complex situations is the linear mixed model representation of the cubic smoothing spline. An extension is proposed in which trend is partitioned into smooth and nonsmooth components. Estimation and inference, the analysis of the three examples and a discussion of extensions and unresolved issues are also presented.
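The key result cited here, the linear mixed model representation of a smoothing spline, can be sketched in simplified form. The sketch below penalizes a truncated linear (rather than cubic) basis, with the penalty `lam` playing the role of the variance ratio sigma2_e / sigma2_u; the basis choice and function name are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def penalized_spline_fit(t, y, knots, lam):
    """Penalized linear-spline fit of y on t. In the mixed-model view the
    truncated-basis coefficients u are random effects and lam is the
    variance ratio sigma2_e / sigma2_u, so choosing the amount of smoothing
    is equivalent to estimating two variance components (e.g. by REML)."""
    X = np.column_stack([np.ones_like(t), t])          # fixed part: intercept, slope
    Z = np.maximum(np.subtract.outer(t, knots), 0.0)   # random part: (t - knot)_+
    C = np.hstack([X, Z])
    D = np.diag([0.0, 0.0] + [1.0] * len(knots))       # penalize only the u block
    coef = np.linalg.solve(C.T @ C + lam * D, C.T @ y)
    return C @ coef
```

As `lam` grows the random part is shrunk away and the fit collapses to the ordinary straight-line regression; as it shrinks, the curve follows the data more closely, which is the smooth/non-smooth trend partition in miniature.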

594 citations


Journal ArticleDOI
TL;DR: A new methodology based on mixed linear models for mapping QTLs with digenic epistasis and QTL×environment (QE) interactions indicated that the mixed-model approaches could provide unbiased estimates for both positions and effects ofQTLs, as well as unbiased predicted values for QE interactions.
Abstract: A new methodology based on mixed linear models was developed for mapping QTLs with digenic epistasis and QTL×environment (QE) interactions. Reliable estimates of QTL main effects (additive and epistasis effects) can be obtained by the maximum-likelihood estimation method, while QE interaction effects (additive×environment interaction and epistasis×environment interaction) can be predicted by the best linear unbiased prediction (BLUP) method. Likelihood ratio and t statistics were combined for testing hypotheses about QTL effects and QE interactions. Monte Carlo simulations were conducted for evaluating the unbiasedness, accuracy, and power for parameter estimation in QTL mapping. The results indicated that the mixed-model approaches could provide unbiased estimates for both positions and effects of QTLs, as well as unbiased predicted values for QE interactions. Additionally, the mixed-model approaches also showed high accuracy and power in mapping QTLs with epistatic effects and QE interactions. Based on the models and the methodology, a computer software program (QTLMapper version 1.0) was developed, which is suitable for interval mapping of QTLs with additive, additive×additive epistasis, and their environment interactions.

563 citations


Journal ArticleDOI
TL;DR: Issues in estimating population size N with capture-recapture models when there is variable catchability among subjects are examined and a logistic-normal mixed model is examined, for which the logit of the probability of capture is an additive function of a random subject and a fixed sampling occasion parameter.
Abstract: We examine issues in estimating population size N with capture-recapture models when there is variable catchability among subjects. We focus on a logistic-normal mixed model, for which the logit of the probability of capture is an additive function of a random subject and a fixed sampling occasion parameter. When the probability of capture is small or the degree of heterogeneity is large, the log-likelihood surface is relatively flat and it is difficult to obtain much information about N. We also discuss a latent class model and a log-linear model that account for heterogeneity and show that the log-linear model has greater scope. Models assuming homogeneity provide much narrower intervals for N but are usually overly optimistic, the actual coverage probability being much lower than the nominal level.

180 citations


Journal ArticleDOI
TL;DR: This paper constructs several MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models, and exploits an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameters in a general linear mixed model may be updated in a single block, improving convergence and producing essentially independent draws from the posterior of the parameters of interest.
Abstract: Markov chain Monte Carlo (MCMC) algorithms have revolutionized Bayesian practice. In their simplest form (i.e., when parameters are updated one at a time) they are, however, often slow to converge when applied to high-dimensional statistical models. A remedy for this problem is to block the parameters into groups, which are then updated simultaneously using either a Gibbs or Metropolis-Hastings step. In this paper we construct several (partially and fully blocked) MCMC algorithms for minimizing the autocorrelation in MCMC samples arising from important classes of longitudinal data models. We exploit an identity used by Chib (1995) in the context of Bayes factor computation to show how the parameters in a general linear mixed model may be updated in a single block, improving convergence and producing essentially independent draws from the posterior of the parameters of interest. We also investigate the value of blocking in non-Gaussian mixed models, as well as in a class of binary response data longitudinal models. We illustrate the approaches in detail with three real-data examples.
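The benefit of blocking can be sketched on a toy one-way random-effects model. Holding the variance components fixed for brevity (an assumption of this sketch; the paper's samplers update them too), the joint Gaussian conditional of the mean and the random effects can be drawn in a single block:

```python
import numpy as np

def blocked_gibbs(y, n_iter=2000, sig2_e=1.0, sig2_b=1.0, seed=0):
    """Blocked sampler for the toy model y[i, j] = mu + b[i] + e[i, j].
    Given the variances, theta = (mu, b) is jointly Gaussian, so the whole
    vector is drawn in one block, eliminating the mu-vs-b autocorrelation
    that single-site updating produces."""
    rng = np.random.default_rng(seed)
    m, n = y.shape
    p = m + 1                      # theta = (mu, b_1, ..., b_m)
    Q = np.zeros((p, p))           # posterior precision of theta
    r = np.zeros(p)
    Q[0, 0] = m * n / sig2_e       # flat prior on mu
    r[0] = y.sum() / sig2_e
    for i in range(m):
        Q[0, i + 1] = Q[i + 1, 0] = n / sig2_e
        Q[i + 1, i + 1] = n / sig2_e + 1.0 / sig2_b
        r[i + 1] = y[i].sum() / sig2_e
    mean = np.linalg.solve(Q, r)
    L = np.linalg.cholesky(Q)
    draws = np.empty((n_iter, p))
    for it in range(n_iter):
        z = rng.standard_normal(p)
        draws[it] = mean + np.linalg.solve(L.T, z)   # theta ~ N(mean, Q^{-1})
    return draws
```

Because the variances are fixed here, successive blocked draws are exactly independent; with unknown variances the same joint draw is simply repeated inside each sweep, which is what produces essentially independent draws for the parameters of interest.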

174 citations


Journal ArticleDOI
TL;DR: In this paper, the authors consider a Bayesian hierarchical linear mixed model where the fixed effects have a vague prior such as a constant prior and the random effect follows a class of CAR(1) models including those whose joint prior distribution of the regional effects is improper.
Abstract: We examine properties of the conditional autoregressive model, or CAR(1) model, which is commonly used to represent regional effects in Bayesian analyses of mortality rates. We consider a Bayesian hierarchical linear mixed model where the fixed effects have a vague prior such as a constant prior and the random effect follows a class of CAR(1) models including those whose joint prior distribution of the regional effects is improper. We give sufficient conditions for the existence of the posterior distribution of the fixed and random effects and variance components. We then prove the necessity of the conditions and give a one-way analysis of variance example where the posterior may or may not exist. Finally, we extend the result to the generalised linear mixed model, which includes as a special case the Poisson log-linear model commonly used in disease mapping.

168 citations


Journal ArticleDOI
TL;DR: The spline model provides greater flexibility at the cost of additional computation and is shown to be capable of picking up features of the lactation curve that are missed by the random regression model.

158 citations


Journal ArticleDOI
TL;DR: Further elaboration of the statistical tools to analyse FA should focus on the usefulness of the method, in order for the correct statistical approaches to be applied more regularly.
Abstract: The unbiased estimation of fluctuating asymmetry (FA) requires independent repeated measurements on both sides. The statistical analysis of such data is currently performed by a two-way mixed ANOVA analysis. Although this approach produces unbiased estimates of FA, many studies do not utilize this method. This may be attributed in part to the fact that the complete analysis of FA is very cumbersome and cannot be performed automatically with standard statistical software. Therefore, further elaboration of the statistical tools to analyse FA should focus on the usefulness of the method, in order for the correct statistical approaches to be applied more regularly. In this paper we propose a mixed regression model with restricted maximum likelihood (REML) parameter estimation to model FA. This routine yields exactly the same estimates of FA as the two-way mixed ANOVA. Yet the advantages of this approach are that it allows (a) testing the statistical significance of FA, (b) modelling and testing heterogeneity in both FA and measurement error (ME) among samples, (c) testing for nonzero directional asymmetry and (d) obtaining unbiased estimates of individual FA levels. The switch from a mixed two-way ANOVA to a mixed regression model was made to avoid overparametrization. Two simulation studies are presented. The first shows that a previously proposed method to test the significance of FA is incorrect, contrary to our mixed regression approach. In the second simulation study we show that a traditionally applied measure of individual FA [abs(left – right)] is biased by ME. The proposed mixed regression method, however, produces unbiased estimates of individual FA after modelling heterogeneity in ME. The applicability of this method is illustrated with two analyses.
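A sketch of the two-way mixed ANOVA partition underlying the FA estimates (Palmer-Strobeck style; the balanced array layout and function name are our assumptions, and the paper's mixed regression model reproduces these same quantities):

```python
import numpy as np

def fa_variance(x):
    """x has shape (n individuals, 2 sides, m replicates). Returns the
    side-by-individual interaction mean square, the replicate
    (measurement-error) mean square, and the FA variance estimate
    (MS_si - MS_err) / m from the classical two-way mixed ANOVA."""
    n, s, m = x.shape                # s == 2 sides
    cell = x.mean(axis=2)            # individual-by-side cell means
    ind = cell.mean(axis=1)          # individual means
    side = cell.mean(axis=0)         # side means (directional asymmetry)
    grand = cell.mean()
    inter = cell - ind[:, None] - side[None, :] + grand
    ms_si = m * (inter ** 2).sum() / ((n - 1) * (s - 1))
    ms_err = ((x - cell[:, :, None]) ** 2).sum() / (n * s * (m - 1))
    return ms_si, ms_err, (ms_si - ms_err) / m
```

Subtracting the error mean square is exactly the correction for measurement error that the abstract describes; without replicates, ME would inflate the apparent FA.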

151 citations


Journal ArticleDOI
TL;DR: A new computing technique, feasible in Jacobi- and conjugate gradient-based iterative methods using iteration on data, is presented; its good performance was due to fast computing time per iteration and quick convergence to the final solutions.

137 citations


Journal ArticleDOI
TL;DR: In this paper, a mixed model framework is used to fit the most common stability models using the MIXED procedure of the SAS system, which allows unbalanced data to be handled.
Abstract: Multienvironment trials are often analyzed to assess the yield stability of genotypes. Most of the common stability measures correspond to parameters of a mixed model with fixed genotypes and random environments. Analysis within a mixed model framework allows unbalanced data to be handled. This note shows how to fit the most common stability models using the MIXED procedure of the SAS system.
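One concrete example of a stability measure expressible in this framework is the classical Finlay-Wilkinson regression of genotype yield on an environment index; the helper below is our illustrative sketch, not the SAS PROC MIXED code the note presents:

```python
def finlay_wilkinson(yields):
    """yields[g][e]: a balanced genotype-by-environment yield table.
    The environment index is the environment mean, and each genotype's
    stability measure is its regression slope on that index (a slope
    near 1.0 indicates average stability)."""
    n_g = len(yields)
    n_e = len(yields[0])
    env = [sum(yields[g][e] for g in range(n_g)) / n_g for e in range(n_e)]
    mean_env = sum(env) / n_e
    sxx = sum((x - mean_env) ** 2 for x in env)
    slopes = []
    for g in range(n_g):
        mean_g = sum(yields[g]) / n_e
        sxy = sum((env[e] - mean_env) * (yields[g][e] - mean_g)
                  for e in range(n_e))
        slopes.append(sxy / sxx)
    return slopes
```

By construction the slopes average to 1.0 over genotypes; the mixed-model formulation is what lets the same measure be estimated from unbalanced data.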

Journal ArticleDOI
TL;DR: In this article, the authors considered nonparametric factorial designs for multivariate observations under the framework of general rank score statistics, and the results were applied to a two-way mixed model assuming compound symmetry and to a factorial design for longitudinal data.

Journal ArticleDOI
TL;DR: In this paper, a more complex degrees-of-freedom method based on Satterthwaite's technique of matching moments is investigated, and the resulting mixed-model F-tests are compared with a Welch-James-type test which has been found to be generally robust to assumption violations.
Abstract: Mixed-model analysis is the newest approach to the analysis of repeated measurements. The approach is supposed to be advantageous (i.e., efficient and powerful) because it allows users to model the covariance structure of their data prior to assessing treatment effects. The statistics for this method are based on an F-distribution with degrees of freedom often just approximated by the residual degrees of freedom. However, previous results indicated that these statistics can produce biased Type I error rates under conditions believed to characterize behavioral science research. This study investigates a more complex degrees-of-freedom method based on Satterthwaite's technique of matching moments. The resulting mixed-model F-tests are compared with a Welch-James-type test which has been found to be generally robust to assumption violations. Simulation results do not universally favor one approach over the other, although additional considerations are discussed outlining the relative merits of each approach.
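Satterthwaite's moment-matching idea referenced here approximates a linear combination of independent mean squares by a scaled chi-square; a minimal sketch:

```python
def satterthwaite_df(variances, dfs, coefs):
    """Satterthwaite's approximation: a linear combination
    sum_i coefs[i] * variances[i] of independent mean squares (each on
    dfs[i] degrees of freedom) is matched to a scaled chi-square, giving
    an approximate, generally fractional, degrees of freedom."""
    num = sum(c * v for c, v in zip(coefs, variances)) ** 2
    den = sum((c * v) ** 2 / d for c, v, d in zip(coefs, variances, dfs))
    return num / den
```

With components s1^2/n1 and s2^2/n2, unit coefficients, and dfs n1-1 and n2-1, this reduces to the familiar Welch two-sample degrees of freedom, which is the connection to the Welch-James-type test used as the comparison here.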

Journal ArticleDOI
TL;DR: In this paper, the authors examined two strategies for meta-analysis of a series of 2 x 2 tables with the odds ratio modelled as a linear combination of study level covariates and random effects representing between-study variation.
Abstract: We examine two strategies for meta-analysis of a series of 2 x 2 tables with the odds ratio modelled as a linear combination of study level covariates and random effects representing between-study variation. Penalized quasi-likelihood (PQL), an approximate inference technique for generalized linear mixed models, and a linear model fitted by weighted least squares to the observed log-odds ratios are used to estimate regression coefficients and dispersion parameters. Simulation results demonstrate that both methods perform adequate approximate inference under many conditions, but that neither method works well in the presence of highly sparse data. Under certain conditions with small cell frequencies the PQL method provides better inference.

Journal ArticleDOI
TL;DR: In this article, rates of Type I error were compared for a variety of repeated measures analysis strategies, including corrected degrees of freedom univariate tests, multivariate tests, mixed model tests, and tests due to Keselman, Carriere & Lix, Algina, Huynh and Lecoutre; based on power differences, the Welch-James type test described by Keselman et al. is recommended.
Abstract: Looney & Stanley's (1989) recommendations regarding analysis strategies for repeated measures designs containing between-subjects grouping variables and within-subjects repeated measures variables were re-examined and compared to recent analysis strategies. That is, corrected degrees of freedom univariate tests, multivariate tests, mixed model tests, and tests due to Keselman, Carriere & Lix (1993) and to Algina (1994), Huynh (1978) and Lecoutre (1991) were compared for rates of Type I error in unbalanced non-spherical repeated measures designs having varied covariance structures and no missing data on the within-subjects variable. Heterogeneous within-subjects and heterogeneous within- and between-subjects structures were investigated along with multivariate non-normality. Results indicated that the tests due to Keselman et al. and Algina, Huynh and Lecoutre provided effective Type I error control whereas the default mixed model approach computed with PROC MIXED (SAS Institute, 1995) generally did not. Based on power differences, we recommend that applied researchers adopt the Welch-James type test described by Keselman et al.

Journal ArticleDOI
TL;DR: A semiparametric mixed effects regression model is proposed for the analysis of clustered or longitudinal data with continuous, ordinal, or binary outcome and Monte Carlo results are presented to show that the method can improve the mean squared error of the fixed effects estimators when the random effects distribution is not Gaussian.
Abstract: A semiparametric mixed effects regression model is proposed for the analysis of clustered or longitudinal data with continuous, ordinal, or binary outcome. The common assumption of Gaussian random effects is relaxed by using a predictive recursion method (Newton and Zhang, 1999) to provide a nonparametric smooth density estimate. A new strategy is introduced to accelerate the algorithm. Parameter estimates are obtained by maximizing the marginal profile likelihood by Powell's conjugate direction search method. Monte Carlo results are presented to show that the method can improve the mean squared error of the fixed effects estimators when the random effects distribution is not Gaussian. The usefulness of visualizing the random effects density itself is illustrated in the analysis of data from the Wisconsin Sleep Survey. The proposed estimation procedure is computationally feasible for quite large data sets.

Journal ArticleDOI
TL;DR: In this article, a Lagrangian dynamic formulation of the mixed similarity subgrid (SGS) model for large-eddy simulation (LES) of turbulence is proposed, where averaging is performed over fluid trajectories, which makes the model applicable to complex flows without directions of statistical homogeneity.
Abstract: A Lagrangian dynamic formulation of the mixed similarity subgrid (SGS) model for large-eddy simulation (LES) of turbulence is proposed. In this model, averaging is performed over fluid trajectories, which makes the model applicable to complex flows without directions of statistical homogeneity. An alternative version based on a Taylor series expansion (nonlinear mixed model) is also examined. The Lagrangian models are implemented in a finite difference code and tested in forced and decaying isotropic turbulence. As comparison, the dynamic Smagorinsky model and volume-averaged formulations of the mixed models are also tested. Good results are obtained, except in the case of low-resolution LES (32³) of decaying turbulence, where the similarity coefficient becomes negative due to the fact that the test-filter scale exceeds the integral scale of turbulence. At a higher resolution (64³), the dynamic similarity coefficient is positive and good agreement is found between predicted and measured kinetic energy evolution. Compared to the eddy viscosity term, the similarity or the nonlinear terms contribute significantly to both SGS dissipation of kinetic energy and SGS force. In order to dynamically test the accuracy of the modeling, the error incurred in satisfying the Germano identity is evaluated. It is found that the dynamic Smagorinsky model generates a very large error, only 3% lower than the worst-case scenario without model. Addition of the similarity or nonlinear term decreases the error by up to about 50%, confirming that it represents a more realistic parameterization than the Smagorinsky model alone.

Journal ArticleDOI
TL;DR: Following Meuwissen et al., a linear mixed model assuming heterogeneous residual variances and known constant variance ratios was applied to the analysis of milk, fat, and protein yields, and fat and protein contents in the French Holstein, Montbeliarde, and Normande dairy cattle populations.

Journal ArticleDOI
TL;DR: In this article, a version of the nonlinear mixed-effects model is presented that allows random effects only on the linear coefficients, and data that are missing at random are allowed on the repeated measures or on the observed variables of the factor analysis submodel.
Abstract: A version of the nonlinear mixed-effects model is presented that allows random effects only on the linear coefficients. Nonlinear parameters are not stochastic. In nonlinear regression, this kind of model has been called conditionally linear. As a mixed-effects model, this structure is more flexible than the popular linear mixed-effects model, while being nearly as straightforward to estimate. In addition to the structure for the repeated measures, a latent variable model (Browne, 1993) is specified for a distinct set of covariates that are related to the random effects in the second level. Unbalanced data are allowed on the repeated measures, and data that are missing at random are allowed on the repeated measures or on the observed variables of the factor analysis submodel. Features of the model are illustrated by two examples. Multilevel models are widely used to study the effects of treatments or to characterize differences between intact groups in designs where individuals are hierarchically nested within random levels of a second variable. Comprehensive reviews of this methodology with emphasis on cluster sampling problems have been presented by Bock (1989), Bryk and Raudenbush (1992), and Goldstein (1987). Essentially the same technology is applied in the analysis of repeated measures. Instead of a model for subjects selected from units of an organization, the prototypical repeated measures design is a series of measurements for a particular individual randomly selected from a population. Recent texts by Crowder and Hand (1990), Davidian and Giltinan (1995), Diggle, Liang, and Zeger (1994), and Vonesh and Chinchilli (1997) contain overviews of this approach, including a variety of extensions and case studies. In the repeated measures context, the model is often called a mixed-effects model. 
Several characteristics make this model attractive for the study of repeated measures: (a) it allows for individual functions to differ from the mean function over the population of subjects, yet characterizes both population and individual patterns as members of a single response function; (b) subjects can be measured ...
(Acknowledgment: This research was supported in part by National Institute of Mental Health grant MH 5-4576. The authors thank Dr. Scott Chaiken of the Armstrong Laboratory, Brooks Air Force Base, for generous permission to use the data of the second example.)
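The conditionally linear structure can be sketched with a hypothetical curve y = a + b*exp(-c*t): once the nonlinear rate c is fixed, the coefficients (a, b) solve a closed-form least-squares problem, so estimation reduces to a one-dimensional search. Both the exponential curve and the grid search are illustrative assumptions, not the paper's estimator:

```python
import math

def fit_conditionally_linear(t, y, c_grid):
    """Fit y ~ a + b * exp(-c * t). The model is linear in (a, b) given c,
    so the inner 2x2 least-squares problem is solved in closed form and
    only the nonlinear parameter c needs a search."""
    best = None
    for c in c_grid:
        g = [math.exp(-c * ti) for ti in t]
        n, sg = len(t), sum(g)
        sgg = sum(gi * gi for gi in g)
        sy = sum(y)
        sgy = sum(gi * yi for gi, yi in zip(g, y))
        det = n * sgg - sg * sg            # normal-equations determinant
        a = (sgg * sy - sg * sgy) / det
        b = (n * sgy - sg * sy) / det
        rss = sum((yi - a - b * gi) ** 2 for gi, yi in zip(g, y))
        if best is None or rss < best[0]:
            best = (rss, a, b, c)
    return best[1], best[2], best[3]       # (a_hat, b_hat, c_hat)
```

In the mixed-effects version described above, only (a, b) receive random effects, which is why the model stays nearly as tractable as a linear mixed model.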

Journal ArticleDOI
TL;DR: This work examines properties of the summary measures approach in the context of the linear mixed effects model when the data are not missing completely at random, in the sense that drop-out depends on the values of the repeated measures after conditioning on fixed covariates.
Abstract: Subjects often drop out of longitudinal studies prematurely, yielding unbalanced data with unequal numbers of measures for each subject. A simple and convenient approach to analysis is to develop summary measures for each individual and then regress the summary measures on between-subject covariates. We examine properties of this approach in the context of the linear mixed effects model when the data are not missing completely at random, in the sense that drop-out depends on the values of the repeated measures after conditioning on fixed covariates. The approach is compared with likelihood-based approaches that model the vector of repeated measures for each individual. Methods are compared by simulation for the case where repeated measures over time are linear and can be summarized by a slope and intercept for each individual. Our simulations suggest that summary measures analysis based on the slopes alone is comparable to full maximum likelihood when the data are missing completely at random but is markedly inferior when the data are not missing completely at random. Analysis discarding the incomplete cases is even worse, with large biases and very poor confidence coverage.

01 Jan 1999
TL;DR: Mixed model based composite interval mapping approaches are capable of handling genetic data derived from multiple environments and directly analyzing genetic main effects (including epistasis) and GE interaction effects.
Abstract: New QTL mapping methods based on mixed linear model approaches were proposed for analyzing complex genetic effects and their interaction with environments. Unbiased prediction can be applied for predicting genotype effects and genotype × environment interaction effects, which can then be further used for mapping QTL or developmental QTL with genetic main effects and GE interaction effects by interval mapping or composite interval mapping approaches. Mixed model based composite interval mapping approaches are capable of handling genetic data derived from multiple environments and directly analyzing genetic main effects (including epistasis) and GE interaction effects. Markov chain Monte Carlo (MCMC) methods can be applied to make inference for the statistical properties of QTL.

Journal ArticleDOI
TL;DR: A mixed effects model is developed for cross-over trials in which the response is measured repeatedly within each time period and allows both the between- and within-subject variance to differ among treatments.
Abstract: A mixed effects model is developed for cross-over trials in which the response is measured repeatedly within each time period. Relative to previous work on repeated measures cross-overs, the methodology synthesizes two important features. First, our procedure eliminates preliminary testing for carry-over, defined loosely as the component of a response that is due to treatment in the preceding period. This is achieved by generalizing the methodology to cross-over designs in which preliminary testing for carry-over is unnecessary. We focus largely on 'simple' carry-over, that is, carry-over that lasts for exactly one period and is independent of the treatment administered in the period in which the carry-over occurs. However, we also illustrate a modification of the procedure for a repeated measures cross-over design which uses a more complicated model of carry-over. Second, the model allows both the between- and within-subject variance to differ among treatments. Conditions are described wherein closed-form (CF) solutions to the variance components as well as closed-form hypothesis tests of the treatment differences exist. Flexibility in the model is illustrated with an example in which inference based on the CF likelihood-based estimates of the variance, and estimates formed using an iterative routine (PROC MIXED) are compared.

Journal ArticleDOI
TL;DR: A data augmentation approach to these computational difficulties in which an overlapping series of submodels is repeatedly fitted, incorporating the missing terms in each submodel as ‘offsets’.
Abstract: Estimation in mixed linear models is, in general, computationally demanding, since applied problems may involve extensive data sets and large numbers of random effects. Existing computer algorithms are slow and/or require large amounts of memory. These problems are compounded in generalized linear mixed models for categorical data, since even approximate methods involve fitting of a linear mixed model within steps of an iteratively reweighted least squares algorithm. Only in models in which the random effects are hierarchically nested can the computations for fitting these models to large data sets be carried out rapidly. We describe a data augmentation approach to these computational difficulties in which we repeatedly fit an overlapping series of submodels, incorporating the missing terms in each submodel as ‘offsets’. The submodels are chosen so that they have a nested random-effect structure, thus allowing maximum exploitation of the computational efficiency which is available in this case. Examples of the use of the algorithm for both metric and discrete responses are discussed, all calculations being carried out using macros within the MLwiN program.
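The offset idea can be sketched on a toy crossed random-effects model: each submodel involves a single random factor (the tractable, nested-like case), with the other factor's current estimate absorbed as an offset. The shrinkage ratios are assumed known here for brevity, whereas the real algorithm re-estimates variance components:

```python
def backfit_crossed(y, rows, cols, lam_a, lam_b, n_iter=50):
    """Toy fit of crossed random effects y = mu + a[row] + b[col] + e by
    cycling between two one-factor submodels, each time absorbing the
    other factor's current estimate as an 'offset'.
    lam_* = sigma2_e / sigma2_* are assumed known."""
    n_a, n_b = max(rows) + 1, max(cols) + 1
    a, b = [0.0] * n_a, [0.0] * n_b
    mu = sum(y) / len(y)
    for _ in range(n_iter):
        # Submodel 1: rows random, current column effects enter as offsets.
        num, cnt = [0.0] * n_a, [0] * n_a
        for yi, ri, ci in zip(y, rows, cols):
            num[ri] += yi - mu - b[ci]
            cnt[ri] += 1
        a = [num[i] / (cnt[i] + lam_a) for i in range(n_a)]  # BLUP shrinkage
        # Submodel 2: columns random, current row effects enter as offsets.
        num, cnt = [0.0] * n_b, [0] * n_b
        for yi, ri, ci in zip(y, rows, cols):
            num[ci] += yi - mu - a[ri]
            cnt[ci] += 1
        b = [num[j] / (cnt[j] + lam_b) for j in range(n_b)]
        mu = sum(yi - a[ri] - b[ci]
                 for yi, ri, ci in zip(y, rows, cols)) / len(y)
    return mu, a, b
```

Each inner update is exactly the shrinkage estimate from a one-factor mixed model, which is the computationally cheap case the abstract exploits.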

Journal ArticleDOI
TL;DR: A preconditioned conjugate gradient method was implemented into an iteration-on-data program for the estimation of breeding values, and its convergence characteristics were studied.
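The core of a preconditioned conjugate gradient solver, here with a simple Jacobi (diagonal) preconditioner on a dense toy system (our illustrative sketch; the paper's iteration-on-data implementation never stores the coefficient matrix):

```python
import numpy as np

def pcg(A, b, tol=1e-10, max_iter=500):
    """Conjugate gradients with a Jacobi (diagonal) preconditioner for a
    symmetric positive definite system A x = b. In an iteration-on-data
    setting, each product A @ p would instead be accumulated record by
    record from the data file, which is what makes the method feasible
    for very large mixed-model equations."""
    x = np.zeros_like(b)
    r = b - A @ x                  # initial residual
    minv = 1.0 / np.diag(A)        # Jacobi preconditioner M^{-1}
    z = minv * r
    p = z.copy()
    rz = r @ z
    for it in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = minv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, it + 1
```

The preconditioner clusters the eigenvalues of the system, which is what produces the quick convergence to the final solutions reported here.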

Journal ArticleDOI
TL;DR: In this article, the authors used angling from small recreational fishing boats to quantify the relative density of snapper (Pagrus auratus) in six areas within the Cape Rodney-Okakari Point Marine Reserve (New Zealand) and four areas adjacent to the reserve.
Abstract: Angling from small recreational fishing boats was used as a sampling method to quantify the relative density of snapper (Pagrus auratus) in six areas within the Cape Rodney-Okakari Point Marine Reserve (New Zealand) and four areas adjacent to the reserve. Penalized quasi-likelihood was used to fit a log-linear mixed-effects model having area and date as fixed effects and boat as a random effect. Simulation and first-order bias correction formulae were employed to assess the validity of the estimates of the area effects. The bias correction is known to be unsuitable for general use because it typically over-estimates bias, and this was observed here. However, it was qualitatively useful for indicating the direction of bias and for indicating when estimators were approximately unbiased. The parameter of primary interest was the ratio of snapper density in the marine reserve versus snapper density outside the reserve, and the estimator of this parameter was first-order asymptotically unbiased. This ratio of snapper densities was estimated to be 11.3.

Journal ArticleDOI
TL;DR: In this article, two topics are discussed for the case in which a fixed effect covariate is measured with error in a generalized linear mixed effects model: the applicability of a direct regression calibration approach in various situations, and a bias correction regression calibration approach.
Abstract: Two major topics are discussed when a fixed effect covariate is measured with error in a generalized linear mixed effects model: (i) the applicability of a direct regression calibration approach in various situations and (ii) a bias correction regression calibration approach. When the fixed and random effect structures are both misspecified, we find that a direct bias correction to the naive estimators is often not feasible due to lack of analytical bias expressions. While the direct regression calibration approach still often leads to inconsistent estimators, a combination of using the regression calibration to correct for the misspecified fixed effects and applying direct bias correction to correct for the misspecified random effects provides a simple, fast, and easy to implement method. Applications of this approach to linear, loglinear, probit and logistic mixed models are discussed in detail. A small simulation study is presented for the logistic normal mixed model.

Journal ArticleDOI
TL;DR: The authors showed that, when there is a single random effect factor in the model, the expectations of the resulting empirical BLUE and BLUP do exist, and that the best linear unbiased estimator (BLUE) of β remains unbiased if the true variance components at which the BLUE or BLUP are computed are replaced by nonnegative, even and translation-invariant estimators.

Journal ArticleDOI
TL;DR: In this article, two standard mixed models with interactions are discussed and the mixed models controversy is resolved when viewed in the context of superpopulation models, and the tests suggested by the expected mean squares under the constrained-parameters model are correct for testing the main effects and interactions under both the unconstrained-and constrainedparameters models.
Abstract: Two standard mixed models with interactions are discussed. When each is viewed in the context of superpopulation models, the mixed models controversy is resolved. The tests suggested by the expected mean squares under the constrained-parameters model are correct for testing the main effects and interactions under both the unconstrained- and constrained-parameters models.

Journal ArticleDOI
TL;DR: The work reported in this article was undertaken to evaluate the utility of SAS PROC MIXED for testing hypotheses concerning GROUP and TIME x GROUP effects in repeated measurements designs with drop-outs, and found that a single random-coefficients model produced appropriate test sizes, but it provided inferior power when informative covariates were added in the attempt to adjust for dropouts.
Abstract: The work reported in this article was undertaken to evaluate the utility of SAS PROC MIXED for testing hypotheses concerning GROUP and TIME × GROUP effects in repeated measurements designs with dropouts. If dropouts are not completely at random, covariate control over informative individual differences on which dropout data patterns depend is widely recognized to be important. However, the inclusion of baseline scores and time-in-study as between-subject covariates in an otherwise well formulated SAS PROC MIXED model resulted in inadequate control over type I error in simulated data with or without dropouts present. The inadequate model formulations and resulting deviant test sizes are presented here as a warning for others who might be guided by the same information sources to employ similar model specifications when analyzing data from actual clinical trials. It is important that the complete model specification be provided in detail when reporting applications of the general linear mixed-model procedure.

Journal ArticleDOI
TL;DR: In this paper, a cross-validation based model selection method, the Predicted Residual Sum of Squares (PRESS), is applied to multivariate linear models with correlated errors.