
Showing papers on "Random effects model published in 1995"


Journal ArticleDOI
TL;DR: In this paper, a framework for efficient IV estimators of random effects models with information in levels, which can accommodate predetermined variables, is presented; models with predetermined variables that have constant correlation with the effects are not considered.

16,245 citations


Journal ArticleDOI
TL;DR: The random-effects regression method performs well in the context of a meta-analysis of the efficacy of a vaccine for the prevention of tuberculosis, where certain factors are thought to modify vaccine efficacy.
Abstract: Many meta-analyses use a random-effects model to account for heterogeneity among study results, beyond the variation associated with fixed effects. A random-effects regression approach for the synthesis of 2 x 2 tables allows the inclusion of covariates that may explain heterogeneity. A simulation study found that the random-effects regression method performs well in the context of a meta-analysis of the efficacy of a vaccine for the prevention of tuberculosis, where certain factors are thought to modify vaccine efficacy. A smoothed estimator of the within-study variances produced less bias in the estimated regression coefficients. The method provided very good power for detecting a non-zero intercept term (representing overall treatment efficacy) but low power for detecting a weak covariate in a meta-analysis of 10 studies. We illustrate the model by exploring the relationship between vaccine efficacy and one factor thought to modify efficacy. The model also applies to the meta-analysis of continuous outcomes when covariates are present.
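
As a rough illustration of the approach, the sketch below fits a random-effects meta-regression to study-level log odds ratios by a common moment-based route (a fixed-effects fit, a method-of-moments estimate of the between-study variance, then weighted least squares). It is a minimal stand-in, not the authors' exact estimator — the paper's smoothed within-study variance estimator, for instance, is not reproduced — and the function name and inputs are illustrative.

```python
import numpy as np

def random_effects_metareg(y, v, x):
    """Moment-based random-effects meta-regression (a sketch).

    y : study-level effect estimates (e.g. log odds ratios from 2x2 tables)
    v : estimated within-study variances
    x : study-level covariate thought to explain heterogeneity
    """
    X = np.column_stack([np.ones_like(y), x])
    # Fixed-effects (inverse-variance) fit, used to form the heterogeneity statistic Q.
    w0 = 1.0 / v
    A = X.T @ (w0[:, None] * X)
    beta_f = np.linalg.solve(A, X.T @ (w0 * y))
    resid = y - X @ beta_f
    q = np.sum(w0 * resid**2)
    # Method-of-moments estimate of the between-study variance tau^2.
    B = X.T @ ((w0**2)[:, None] * X)
    c = w0.sum() - np.trace(np.linalg.solve(A, B))
    tau2 = max(0.0, (q - (len(y) - X.shape[1])) / c)
    # Random-effects weighted least squares with weights 1 / (v_i + tau^2).
    w = 1.0 / (v + tau2)
    Aw = X.T @ (w[:, None] * X)
    beta = np.linalg.solve(Aw, X.T @ (w * y))
    se = np.sqrt(np.diag(np.linalg.inv(Aw)))
    return beta, se, tau2
```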

743 citations


Journal ArticleDOI
Philip Hougaard1
TL;DR: A frailty model is a random effects model for time variables, where the random effect (the frailty) has a multiplicative effect on the hazard.
Abstract: A frailty model is a random effects model for time variables, where the random effect (the frailty) has a multiplicative effect on the hazard. It can be used for univariate (independent) failure times, i.e. to describe the influence of unobserved covariates in a proportional hazards model. More interesting, however, is to consider multivariate (dependent) failure times generated as conditionally independent times given the frailty. This approach can be used both for survival times for individuals, like twins or family members, and for repeated events for the same individual. The standard assumption is to use a gamma distribution for the frailty, but this is a restriction that implies that the dependence is most important for late events. More generally, the distribution can be stable, inverse Gaussian, or follow a power variance function exponential family. Theoretically, large differences are seen between the choices. In practice, using the largest model makes it possible to allow for more general dependence structures, without making the formulas too complicated.
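
The shared-frailty construction can be made concrete with a short simulation — a minimal sketch assuming the standard gamma frailty (mean 1, variance theta) and a constant baseline hazard, not any particular data or model from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_shared_frailty(n_groups=1000, group_size=2, theta=0.5, base_rate=0.1):
    """Simulate clustered survival times under a shared gamma frailty model.

    Each group (e.g. a twin pair) shares one frailty Z with mean 1 and
    variance theta; given Z the hazard is multiplicative, Z * base_rate,
    so times within a group are conditionally independent exponentials.
    """
    z = rng.gamma(shape=1.0 / theta, scale=theta, size=n_groups)  # E[Z]=1, Var[Z]=theta
    return rng.exponential(1.0 / (z[:, None] * base_rate),
                           size=(n_groups, group_size))

t = simulate_shared_frailty()
# The shared frailty induces positive dependence, visible as a positive
# correlation between the two failure times in each pair.
print(np.corrcoef(t[:, 0], t[:, 1])[0, 1])
```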

541 citations


Journal ArticleDOI
TL;DR: It is described how a full Bayesian analysis can deal with unresolved issues in meta-analysis, such as the choice between fixed- and random-effects models, the choice of population distribution in a random-effects analysis, the treatment of small studies and extreme results, and the incorporation of study-specific covariates.
Abstract: Current methods for meta-analysis still leave a number of unresolved issues, such as the choice between fixed- and random-effects models, the choice of population distribution in a random-effects analysis, the treatment of small studies and extreme results, and incorporation of study-specific covariates. We describe how a full Bayesian analysis can deal with these and other issues in a natural way, illustrated by a recently published example that displays a number of problems. Such analyses are now generally available using the BUGS implementation of Markov chain Monte Carlo numerical integration techniques. Appropriate proper prior distributions are derived, and a sensitivity analysis to a variety of prior assumptions is carried out. Current methods are briefly summarized and compared to the full Bayes analysis.
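
A toy version of such an analysis: the Gibbs sampler below fits the normal-normal random-effects model with known within-study variances, using a flat prior on the overall mean and a vague inverse-gamma prior on the between-study variance. These are simple conjugate stand-ins, not the carefully derived priors of the paper, and the hyperparameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def gibbs_meta(y, s2, n_iter=5000, a=0.001, b=0.001):
    """Gibbs sampler for y_i ~ N(theta_i, s2_i), theta_i ~ N(mu, tau2).

    mu gets a flat prior; tau2 gets a vague inverse-gamma(a, b) prior.
    """
    k = len(y)
    mu, tau2 = y.mean(), y.var()
    draws = []
    for _ in range(n_iter):
        # Study effects: precision-weighted compromise of data and pooled mean.
        prec = 1.0 / s2 + 1.0 / tau2
        theta = rng.normal((y / s2 + mu / tau2) / prec, np.sqrt(1.0 / prec))
        # Overall mean given the study effects (flat prior).
        mu = rng.normal(theta.mean(), np.sqrt(tau2 / k))
        # Between-study variance: conjugate inverse-gamma update.
        tau2 = 1.0 / rng.gamma(a + k / 2.0,
                               1.0 / (b + 0.5 * np.sum((theta - mu) ** 2)))
        draws.append((mu, tau2))
    return np.array(draws)
```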

535 citations


Journal ArticleDOI
TL;DR: In this article, the authors evaluate two software packages that are available for fitting multilevel models to binary response data, namely VARCL and ML3, by using a Monte Carlo study designed to represent quite closely the actual structure of a data set used in an analysis of health care utilization in Guatemala.
Abstract: We evaluate two software packages that are available for fitting multilevel models to binary response data, namely VARCL and ML3, by using a Monte Carlo study designed to represent quite closely the actual structure of a data set used in an analysis of health care utilization in Guatemala. We find that the estimates of fixed effects and variance components produced by the software packages are subject to very substantial downward bias when the random effects are sufficiently large to be interesting. In fact, the fixed effect estimates are no better than the estimates obtained by using standard logit models that ignore the hierarchical structure of the data. The estimates of standard errors appear to be reasonably accurate and superior to those obtained by ignoring clustering, although one might question their utility in the presence of large biases. We conclude that alternative estimation procedures need to be developed and implemented for the binary response case.

497 citations


Journal ArticleDOI
TL;DR: A Bayesian model is proposed in which both the area-specific intercept and trend are modelled as random effects, allowing for correlation between them; it is an extension of a model originally proposed for disease mapping.
Abstract: The analysis of variation of risk for a given disease in space and time is a key issue in descriptive epidemiology. When the data are scarce, maximum likelihood estimates of the area-specific risk and of its linear time-trend can be seriously affected by random variation. In this paper, we propose a Bayesian model in which both area-specific intercept and trend are modelled as random effects and correlation between them is allowed for. This model is an extension of that originally proposed for disease mapping. It is illustrated by the analysis of the cumulative prevalence of insulin dependent diabetes mellitus as observed at the military examination of 18-year-old conscripts born in Sardinia during the period 1936-1971. Data concerning the genetic differentiation of the Sardinian population are used to interpret the results.
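
The random-effects structure can be sketched in a few lines: each area draws a correlated (intercept, trend) pair from a bivariate normal distribution, and counts follow a Poisson model on the resulting log relative risk. All values below are illustrative and not taken from the Sardinian data.

```python
import numpy as np

rng = np.random.default_rng(2)

# Each area i gets a correlated pair (intercept u_i, trend v_i); the log
# relative risk in area i at time t is eta_it = alpha + u_i + (beta + v_i) * t.
n_areas, rho, su, sv = 50, -0.3, 0.4, 0.1   # illustrative values
cov = np.array([[su**2, rho * su * sv],
                [rho * su * sv, sv**2]])
u, v = rng.multivariate_normal([0.0, 0.0], cov, size=n_areas).T

alpha, beta = -8.0, 0.02        # overall level and linear time trend (log scale)
t = np.arange(36)               # e.g. successive birth cohorts
log_rr = alpha + u[:, None] + (beta + v[:, None]) * t[None, :]
cases = rng.poisson(np.exp(log_rr) * 1000)   # 1000 person-years per cell, say
```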

446 citations


Journal ArticleDOI
TL;DR: In this article, an algorithm is described to estimate variance components for a univariate animal model using sparse matrix techniques; residuals and fitted values for random effects are used to derive additional right-hand sides for which the mixed model equations are solved in turn, yielding an average of the observed and expected second derivatives of the likelihood function.

347 citations


Journal ArticleDOI
TL;DR: In this paper, the authors present simple hierarchical centring reparametrisations that often give improved convergence for a broad class of normal linear mixed models, including the Laird-Ware model, and a general structure for hierarchically nested linear models.
Abstract: SUMMARY The generality and easy programmability of modern sampling-based methods for maximisation of likelihoods and summarisation of posterior distributions have led to a tremendous increase in the complexity and dimensionality of the statistical models used in practice. However, these methods can often be extremely slow to converge, due to high correlations between, or weak identifiability of, certain model parameters. We present simple hierarchical centring reparametrisations that often give improved convergence for a broad class of normal linear mixed models. In particular, we study the two-stage hierarchical normal linear model, the Laird-Ware model for longitudinal data, and a general structure for hierarchically nested linear models. Using analytical arguments, simulation studies, and an example involving clinical markers of acquired immune deficiency syndrome (AIDS), we indicate when reparametrisation is likely to provide substantial gains in efficiency.
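
A minimal demonstration of the idea for the two-stage normal model with known variances: the same Gibbs sampler is run under the uncentred parameterisation (mu plus zero-mean effects) and the centred one (group means eta_i ~ N(mu, sigma_a^2)), and the lag-1 autocorrelation of the draws for mu is compared. This is a sketch of the phenomenon with illustrative values, not the paper's analysis; the regime chosen (large group-level variance) is one where centring tends to help.

```python
import numpy as np

rng = np.random.default_rng(3)

# Balanced one-way data: k groups of size n, variances treated as known.
k, n, sig_a2, sig_e2, mu_true = 30, 5, 4.0, 1.0, 1.0
alpha = rng.normal(0, np.sqrt(sig_a2), k)
y = mu_true + alpha[:, None] + rng.normal(0, np.sqrt(sig_e2), (k, n))
ybar = y.mean(axis=1)

def gibbs(centred, n_iter=2000):
    mu, mus = 0.0, []
    prec = n / sig_e2 + 1.0 / sig_a2   # same conditional precision either way
    for _ in range(n_iter):
        if centred:
            # Centred: sample eta_i = mu + alpha_i ~ N(mu, sig_a2), then mu | eta.
            eta = rng.normal((n * ybar / sig_e2 + mu / sig_a2) / prec,
                             np.sqrt(1.0 / prec))
            mu = rng.normal(eta.mean(), np.sqrt(sig_a2 / k))
        else:
            # Uncentred: sample alpha_i ~ N(0, sig_a2), then mu | alpha.
            a = rng.normal(n * (ybar - mu) / sig_e2 / prec, np.sqrt(1.0 / prec))
            mu = rng.normal((y - a[:, None]).mean(), np.sqrt(sig_e2 / (k * n)))
        mus.append(mu)
    mus = np.array(mus)
    return np.corrcoef(mus[:-1], mus[1:])[0, 1]   # lag-1 autocorrelation of mu

print("uncentred:", gibbs(False), " centred:", gibbs(True))
```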

318 citations


Journal ArticleDOI
TL;DR: This paper develops a class of models to deal with missing data from longitudinal studies; the primary response, conditional on a random parameter shared with the missingness model, is allowed to follow a generalized linear model, which is approximated by conditioning on the data that describe missingness.
Abstract: This paper develops a class of models to deal with missing data from longitudinal studies. We assume that separate models for the primary response and missingness (e.g., number of missed visits) are linked by a common random parameter. Such models have been developed in the econometrics (Heckman, 1979, Econometrica 47, 153-161) and biostatistics (Wu and Carroll, 1988, Biometrics 44, 175-188) literature for a Gaussian primary response. We allow the primary response, conditional on the random parameter, to follow a generalized linear model and approximate the generalized linear model by conditioning on the data that describes missingness. The resultant approximation is a mixed generalized linear model with possibly heterogeneous random effects. An example is given to illustrate the approximate approach, and simulations are performed to critique the adequacy of the approximation for repeated binary data.

303 citations


Journal ArticleDOI
TL;DR: Building on the EM algorithm proposed by Nielsen, Gill, Andersen and Sorensen (1992) to estimate the cumulative baseline hazard and the variance of the random effect in the frailty model, a generalization of Cox's proportional hazards model, this article derives the asymptotic distribution of the estimators.
Abstract: The frailty model is a generalization of Cox's proportional hazards model which includes a random effect. Nielsen, Gill, Andersen and Sorensen (1992) proposed an EM algorithm to estimate the cumulative baseline hazard and the variance of the random effect. Here the asymptotic distribution of the estimators is given along with a consistent estimator of the asymptotic variance.

276 citations


Journal ArticleDOI
TL;DR: In this article, the authors derive three LM statistics for an error component model with first-order serially correlated errors, showing that the corresponding LM statistic is the same whether the alternative is AR(1) or MA(1).

Journal ArticleDOI
TL;DR: In this article, a general method is proposed for adjusting any conveniently defined initial estimates to yield estimates which are asymptotically unbiased and consistent; the method is motivated by iterative bias correction and can, in principle, be applied to any parametric model.
Abstract: SUMMARY Obtaining estimates that are nearly unbiased has proven to be difficult when random effects are incorporated into a generalized linear model. In this paper, we propose a general method of adjusting any conveniently defined initial estimates to result in estimates which are asymptotically unbiased and consistent. The method is motivated by iterative bias correction and can be applied in principle to any parametric model. A simulation-based approach of implementing the method is described and the relationship of the method proposed with other sampling-based methods is discussed. Results from a small scale simulation study show that the method proposed can lead to estimates which are nearly unbiased even for the variance components while the standard errors are only slightly inflated. A new analysis of the famous salamander mating data is described which reveals previously undetected between-animal variation among the male salamanders and results in better prediction of mating outcomes.
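
The flavour of the method can be shown on a deliberately simple problem: correcting the downward bias of the maximum likelihood variance estimator by simulation. The real use case is random-effects GLMs, where no closed-form bias is available; everything below (toy estimator, loop counts) is illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)

def mle_var(x):
    return np.mean((x - x.mean()) ** 2)   # biased: E = (n-1)/n * sigma^2

def bias_corrected(theta_hat, n, n_sim=2000, n_iter=20):
    """Iterative simulation-based bias correction of an initial estimate.

    At each step, data are simulated at the current parameter value, the
    estimator's mean there is approximated by Monte Carlo, and the parameter
    is shifted so that this mean matches the original estimate.
    """
    theta = theta_hat
    for _ in range(n_iter):
        sims = np.array([mle_var(rng.normal(0.0, np.sqrt(theta), n))
                         for _ in range(n_sim)])
        theta += theta_hat - sims.mean()   # fixed point: E_theta[estimator] = theta_hat
    return theta

x = rng.normal(0.0, 1.0, 10)               # true sigma^2 = 1, small sample
print(mle_var(x), bias_corrected(mle_var(x), len(x)))
```

For this toy case the fixed point is the usual n/(n-1) correction; the point of the simulation-based version is that it needs no such formula.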

Journal ArticleDOI
TL;DR: In this paper, a random effects model is used to derive mean and variance models for estimated disease rates, based on covariate data from random samples of individuals from each of several cohorts; estimating equations for relative rate parameters are then developed by replacing cohort covariate averages with corresponding sample averages.
Abstract: SUMMARY Statistical methods are proposed for estimating relative rate parameters, based on estimated disease rates and covariate data from random samples of individuals from each of several cohorts. A random effects model is used to derive mean and variance models for estimated disease rates. Estimating equations for relative rate parameters are then developed by replacing cohort covariate averages by corresponding sample averages. The asymptotic distribution of regression parameter estimates is derived, and the asymptotic bias is shown to be small, even if covariates are contaminated by classical random measurement errors, provided the covariate sample size in each cohort is not small. Simulation studies, motivated by international data on diet and breast cancer, provide insights into the properties of the proposed estimators.

Journal ArticleDOI
TL;DR: This article examined the effect of marital duration and number of children on spousal interaction in intact marriages in a four-wave panel sample, and explored the fixed effect and random effects models based on pooled time-series data and discussed their advantages and disadvantages for the analysis of survey panel data compared with other available approaches.
Abstract: This article examines quantitative panel analysis techniques appropriate for situations in which the researcher models determinants of change in continuous outcomes. One set of techniques, based on the analysis of pooled time-series data sets, has received little attention in the family literature, although there are a number of situations where these techniques would be appropriate, such as the analysis of multiple-wave panel data. The article explores the fixed effect (change-score) and random effects models based on pooled time-series data and discusses their advantages and disadvantages for the analysis of survey panel data compared with other available approaches. An empirical example of these methods is presented in which the effect of marital duration and number of children on spousal interaction in intact marriages was examined in a four-wave panel sample.

A significant development in the field of family research has been the increased availability of large panel survey data sets containing variables of interest to family researchers (e.g., National Survey of Families and Households Panel, Panel Study of Income Dynamics, National Longitudinal Survey of Youth, Marital Instability Over the Life Course Panel). The potential posed by these panel studies is often unrealized because of the greater complexity of panel analyses and the shortage of clear and accessible guidelines for selecting the appropriate analysis models and statistical software. Several guides to panel methods are available (e.g., Campbell, Mutran, & Parker, 1986; Collins & Horn, 1991; Finkel, 1995; Johnson, 1988; Kessler & Greenberg, 1981; Markus, 1979; Menard, 1991). While these guides can be of value, they have the following drawbacks: some recent developments are not covered, the focus has been largely on methods for two-wave panels, and situations commonly encountered by family researchers in the analysis of survey data, such as missing waves for some respondents, have been neglected.

One type of model commonly found in family research involves the analysis of change over time in a continuous dependent variable. For example, a researcher interested in explaining change in marital interaction over the course of a marriage may model the effects of increased duration of the marriage, changes in the number and ages of the children, spells of employment of both spouses, and changes in family income. This situation involves a continuous dependent variable (degree of spousal interaction) with continuous variables (marital duration, income) and events (addition and subtraction of children, spells of employment and unemployment) as explanatory variables. Panel studies of changes in psychological distress brought about by marital dissolution, changes in frequency of sexual intercourse over the duration of the marriage, and the effect of retirement on marital happiness are research problems with similar analytic needs.

With two or more waves of panel data and a continuous dependent variable, the researcher has a choice among five basic panel analysis models. These are (a) regression with lagged dependent variables, (b) structural equation models with reciprocal and lagged effects (e.g., LISREL), (c) repeated measures analysis of variance, (d) growth curve and hierarchical effects models, and (e) fixed (change-score) and random effects regression estimators for pooled time-series data sets.
Event history, or hazard, models (Teachman, 1982) and panel models for qualitative variables (Clogg, Eliason, & Grego, 1990) are excluded from this list because they are not designed for use with continuous dependent variables. The first three techniques have been widely used with panel data in the family literature and are generally accessible to the researcher. Many of the regression techniques have been superseded by the structural equation approaches, which are capable of estimating reciprocal effects and of controlling for biases introduced by measurement errors and autocorrelated errors. …
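
For a balanced continuous-outcome panel, the fixed-effects (change-score family) and random-effects estimators contrasted above reduce to a few lines of linear algebra. The sketch below uses known variance components for the random-effects quasi-demeaning step, which in practice must be estimated; all values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Balanced panel: i = 1..N units observed at t = 1..T waves, one regressor.
N, T, beta = 200, 4, 0.5
u = rng.normal(0, 1.0, N)                       # unit-level random effects
x = rng.normal(0, 1.0, (N, T))
y = beta * x + u[:, None] + rng.normal(0, 1.0, (N, T))

# Fixed-effects (within) estimator: demeaning within unit sweeps out anything
# constant over time, including the unobserved heterogeneity u_i.
xd, yd = x - x.mean(1, keepdims=True), y - y.mean(1, keepdims=True)
beta_fe = np.sum(xd * yd) / np.sum(xd * xd)

# Random-effects GLS via quasi-demeaning with weight theta (known variances here).
sig_e2, sig_u2 = 1.0, 1.0
theta = 1.0 - np.sqrt(sig_e2 / (sig_e2 + T * sig_u2))
xq, yq = x - theta * x.mean(1, keepdims=True), y - theta * y.mean(1, keepdims=True)
beta_re = np.sum(xq * yq) / np.sum(xq * xq)

print(beta_fe, beta_re)   # both near 0.5 when x is uncorrelated with u
```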

Journal ArticleDOI
TL;DR: In this article, the authors provide an overview of the rationale behind, the implementation of, and the uses of the random coefficient approach to econometric modelling; a simple random coefficient model is presented, and methods for estimating, testing, and validating such a model are described.
Abstract: This paper provides an overview of the rationale behind, the implementation of, and the uses of the random coefficient approach to econometric modelling. A simple random coefficient model is presented, and methods for estimating, testing, and validating such a model are described. A more general model is then presented. The general model is shown to include several fixed-coefficient models as special cases and can be estimated incorporating a variety of judgements concerning simplification. Finally, the paper reviews recent applications of random coefficient estimation.

Journal ArticleDOI
TL;DR: In this article, a goodness-of-fit test for generalized linear models with canonical link function and known dispersion parameter is proposed, which is based on the score test for extra variation in a random effects model.
Abstract: This paper considers testing the goodness-of-fit of regression models. Emphasis is on a goodness-of-fit test for generalized linear models with canonical link function and known dispersion parameter. The test is based on the score test for extra variation in a random effects model. By choosing a suitable form for the dispersion matrix, a goodness-of-fit test statistic is obtained which is quite similar to test statistics based on non-parametric kernel methods. We consider the distribution of the test statistic and discuss the choice of the dispersion matrix. The testing method can handle models with continuous and discrete covariates. Corrections for bias when parameters are estimated are available, and extensions to models with unknown dispersion parameters and to more general nonlinear models are discussed. The proposed goodness-of-fit method is demonstrated in a simulation study and on real data of bone marrow transplant patients. The individual contributions of observations to the test statistic are used to perform residual analyses.

Journal ArticleDOI
TL;DR: A multiple lactation model for test day data was applied to predict genetic merit for somatic cell scores of Canadian Holsteins and was found to be practical; correlations of the resulting EBV with EBV for some udder conformation traits were desirable, but correlations with EBV for milk and protein yield were undesirable.

Journal ArticleDOI
TL;DR: This paper proposes a graphical method for assessing adequacy of the proportional hazards frailty models and focuses on the assessment of the gamma distribution assumption for the frailties.
Abstract: Proportional hazards frailty models use a random effect, so-called frailty, to construct association for clustered failure time data. It is customary to assume that the random frailty follows a gamma distribution. In this paper, we propose a graphical method for assessing adequacy of the proportional hazards frailty models. In particular, we focus on the assessment of the gamma distribution assumption for the frailties. We calculate the average of the posterior expected frailties at several follow-up time points and compare it at these time points to 1, the known mean frailty. Large discrepancies indicate lack of fit. To aid in assessing the goodness of fit, we derive and estimate the standard error of the mean of the posterior expected frailties at each time point examined. We give an example to illustrate the proposed methodology and perform sensitivity analysis by simulations.
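
For the gamma case the posterior expected frailty has a closed form, which is what makes the diagnostic cheap to compute. Below is a hedged sketch (the function name is hypothetical, and simulated event counts stand in for the output of a fitted model): for a gamma frailty with mean 1 and variance theta, a cluster with d events and accumulated hazard h so far has posterior expected frailty (1/theta + d)/(1/theta + h).

```python
import numpy as np

def posterior_frailty_means(theta, d, h):
    """Posterior expected gamma frailties given cluster event counts d and
    accumulated hazards h, for a gamma(1/theta, theta) frailty (mean 1)."""
    return (1.0 / theta + d) / (1.0 / theta + h)

# Toy check at one follow-up time: under a correctly specified model the
# average posterior expected frailty should be close to the known mean of 1.
rng = np.random.default_rng(6)
theta, n = 0.5, 5000
z = rng.gamma(1.0 / theta, theta, n)     # true frailties
h = rng.uniform(0.5, 2.0, n)             # accumulated hazards at this time
d = rng.poisson(z * h)                   # events, conditionally Poisson given z
print(posterior_frailty_means(theta, d, h).mean())
```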

Journal ArticleDOI
TL;DR: In this article, a random-effects model for multivariate and grouped univariate ordered categorical data is presented, in which the assumed family of distributions for the random effects adopts a wide variety of forms and shapes, and the likelihood can be computed without recourse to numerical integration or Gaussian quadrature.
Abstract: This article presents a random-effects model for multivariate and grouped univariate ordered categorical data. The assumed family of distributions for the random effects adopts a wide variety of forms and shapes. The model's likelihood has a closed expression and can be computed without recourse to numerical integration or Gaussian quadrature. A bivariate and a grouped univariate example are used to illustrate the proposed model.

Journal ArticleDOI
TL;DR: This paper proposes a score test for a generalized linear model with random effect, in which the distribution of the response variable given the random effect is entirely defined; unlike the likelihood ratio test, however, the score test does not require estimation of the parameters of a mixed-effects model nor specification of the mixing distribution.
Abstract: We propose two tests for testing homogeneity among clustered data adjusting for the effects of covariates. The first is a score test for a generalized linear model with random effect, in which the distribution of the response variable given the random effect is entirely defined. In contrast to the likelihood ratio test, however, the score test does not require estimation of the parameters of a mixed-effects model nor specification of the mixing distribution. The second test is proposed in the framework of the generalized estimating equation (GEE) approach. In deriving this test, we need only the specification of the marginal expectation and variance of the response variable and the fourth moment for the overdispersion term, whereas for deriving the score test for mixed effects models, the entire conditional distribution must be specified. We demonstrate that the two tests are identical when the covariance matrix assumed in the GEE approach is that of the random-effects model. In both approaches, ...

Journal ArticleDOI
01 Dec 1995-Metrika
TL;DR: In this paper, the history of variance components estimation is reviewed, from ANOVA-based methods through maximum likelihood (ML) and restricted maximum likelihood (REML), including minimum norm quadratic unbiased estimation (MINQUE), which is closely related to REML but with fewer advantages.
Abstract: Variance components estimation originated with estimating error variance in analysis of variance by equating error mean square to its expected value. This equating procedure was then extended to random effects models, first for balanced data (for which minimum variance properties were subsequently established) and later for unbalanced data. Unfortunately, this ANOVA methodology yields no optimum properties (other than unbiasedness) for estimation from unbalanced data. Today it is being replaced by maximum likelihood (ML) and restricted maximum likelihood (REML) based on normality assumptions and involving nonlinear equations that have to be solved numerically. There is also minimum norm quadratic unbiased estimation (MINQUE) which is closely related to REML but with fewer advantages.
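
The equating procedure is short enough to state in code for the balanced one-way random model, where E[MSE] = sigma_e^2 and E[MSA] = sigma_e^2 + n * sigma_a^2; the simulation values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(7)

# Balanced one-way random model y_ij = mu + a_i + e_ij, k groups of size n.
k, n, sig_a2, sig_e2 = 40, 6, 2.0, 1.0
a = rng.normal(0, np.sqrt(sig_a2), k)
y = 5.0 + a[:, None] + rng.normal(0, np.sqrt(sig_e2), (k, n))

# ANOVA estimation: equate mean squares to their expected values.
ybar_i, ybar = y.mean(axis=1), y.mean()
msa = n * np.sum((ybar_i - ybar) ** 2) / (k - 1)
mse = np.sum((y - ybar_i[:, None]) ** 2) / (k * (n - 1))
sig_e2_hat = mse
sig_a2_hat = (msa - mse) / n   # can come out negative, one known drawback
print(sig_e2_hat, sig_a2_hat)
```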

Journal ArticleDOI
TL;DR: Simulation studies provide insight into the efficiency and bias of relative rate parameter estimates with respect to covariate dispersion, confounding, and covariate measurement error.
Abstract: Comparisons of individual- and aggregate-level analyses of data from a multigroup observational study are made using an exponential form relative rate model. Stratified, analytic random effects, and aggregate random effects analyses are studied. Estimating equations are developed to give a consistent estimation procedure across analyses and corresponding information matrices are compared. Simulation studies provide insight into the efficiency and bias of relative rate parameter estimates with respect to covariate dispersion, confounding, and covariate measurement error.

Journal ArticleDOI
TL;DR: In this paper, the authors compared the results of different diagonalization levels for different types of conformation traits and found that the error of the estimates was 2 to 10 times lower than the fraction of non-diagonalizable (co)variances because 57 to 91% of these variances were recovered after the backtransformation step.

Journal ArticleDOI
TL;DR: In this paper, the authors focus on three crucial choices identified in recent meta-analyses, namely (a) the effect of using approximate statistical techniques rather than exact methods, (b) the effect of using fixed or random effect models, and (c) the effect of publication bias on the meta-analysis result.
Abstract: Summary This paper focuses mainly on three crucial choices identified in recent meta-analyses, namely (a) the effect of using approximate statistical techniques rather than exact methods, (b) the effect of using fixed or random effect models, and (c) the effect of publication bias on the meta-analysis result. The paper considers their impact on a set of over thirty studies of passive smoking and lung cancer in non-smokers, and addresses other issues such as the role of study comparability, the choice of raw or adjusted data when using published summary statistics, and the effect of biases such as misclassification of subjects and study quality. The paper concludes that, at least in this example, different conclusions might be drawn from meta-analyses based on fixed or random effect models; that exact methods might increase estimated confidence interval widths by 5–20% over standard approximate (logit and Mantel-Haenszel) methods, and that these methods themselves differ by this order of magnitude; that taking study quality into account changes some results, and also improves homogeneity; that the use of unadjusted or author-adjusted data makes limited difference; that there appears to be obvious publication bias favouring observed raised relative risks; and that the choice of studies for inclusion is the single most critical choice made by the modeller.
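
The fixed- versus random-effects choice can be made concrete for study-level log relative risks. Below is a standard inverse-variance / DerSimonian-Laird sketch, not the exact methods compared in the paper, and the example data are invented:

```python
import numpy as np

def pool_effects(y, v):
    """Fixed-effect and DerSimonian-Laird random-effects pooling of
    study-level log relative risks y with within-study variances v."""
    w = 1.0 / v
    fixed = np.sum(w * y) / np.sum(w)
    # Moment estimate of between-study variance from the heterogeneity Q.
    q = np.sum(w * (y - fixed) ** 2)
    tau2 = max(0.0, (q - (len(y) - 1)) / (w.sum() - (w**2).sum() / w.sum()))
    w_re = 1.0 / (v + tau2)
    random = np.sum(w_re * y) / np.sum(w_re)
    return (fixed, 1.0 / np.sqrt(w.sum())), (random, 1.0 / np.sqrt(w_re.sum())), tau2

# Invented studies: with tau2 > 0 the random-effects standard error is wider,
# one reason the two models can support different conclusions.
y = np.log(np.array([1.2, 1.5, 0.9, 2.0, 1.1]))
v = np.array([0.05, 0.10, 0.08, 0.20, 0.06])
print(pool_effects(y, v))
```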

Journal ArticleDOI
TL;DR: Most of the non-additive genetic variance in the traits studied is accounted for by dominance genetic effects rather than additive x additive (a:a) effects.
Abstract: Dominance and additive x additive genetic variances were estimated for birth and weaning traits of calves from three synthetic lines of beef cattle differing in mature size. Data consisted of 3,992 and 2,877 records from lines of small-, medium-, and large-framed calves in each of two research herds located at Rhodes and McNay, IA, respectively. Variance components were estimated separately by herd and line for birth weight (BWT), birth hip height (BH), 205-d weight (WW), and 205-d hip height (WH) by derivative-free REML with an animal model. Model 1 included fixed effects of year, sex, and age of dam. Random effects were additive direct (a) and additive maternal (m) genetic with covariance (a,m), maternal permanent environmental, and residual. Model 2 also included dominance (d), and Model 3 included dominance plus additive x additive (a:a) effects. In general, only slight changes occurred in other variance component estimates when d was included in Model 2. However, large estimates of additive x additive genetic variances obtained with Model 3 for 4 out of 24 analyses were associated with reductions in estimates of direct additive variances. Direct (maternal) heritability estimates averaged across herd-line combinations with Model 2 were .53 (.11), .42 (.04), .27 (.12), and .35 (.04) for BWT, BH, WW, and WH, respectively. Corresponding covariance (a,m) estimates as fractions of phenotypic variance (sigma_p^2) were .00, .01, .01, and .06, respectively. For maternal permanent environmental effects in Model 2, average estimates of variances as fractions of sigma_p^2 across herd-line combinations were .03, .00, .05, and .02 for BWT, BH, WW, and WH, respectively. Dominance effects explained, on average, 18, 26, 28, and 11% of total variance for BWT, BH, WW, and WH, respectively. Most of the estimates for additive x additive variances were negligible, except for one data set for BWT, two for BH, and one for WH, where the relative estimates of this component were high (.21 to .45). These results suggest that most of the non-additive genetic variance in the traits studied is accounted for by dominance genetic effects.

Journal ArticleDOI
TL;DR: The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while interanimal variability was inefficiently estimated with most sampling schedules of the two designs.
Abstract: A Monte Carlo simulation technique used to evaluate the effect of the arrangement of concentrations on the efficiency of estimation of population pharmacokinetic parameters in the preclinical setting is described. Although the simulations were restricted to the one-compartment model with intravenous bolus input, they provide the basis for discussing some structural aspects involved in designing a destructive (“quantic”) preclinical population pharmacokinetic study with a fixed sample size, as is usually the case in such studies. The efficiency of parameter estimation obtained with sampling strategies based on the three and four time point designs was evaluated in terms of the percent prediction error, design number, individual and joint confidence interval coverage for parameter estimates, and correlation analysis. The data sets contained random terms for both inter- and residual intra-animal variability. The results showed that the typical population parameter estimates for clearance and volume were efficiently (accurately and precisely) estimated for both designs, while interanimal variability (the only random effect parameter that could be estimated) was inefficiently (inaccurately and imprecisely) estimated with most sampling schedules of the two designs. The exact location of the third and fourth time point for the three and four time point designs, respectively, was not critical to the efficiency of overall estimation of all population parameters of the model. However, some individual population pharmacokinetic parameters were sensitive to the location of these times.
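
A stripped-down version of the data-generating side of such a simulation, assuming log-normal inter-animal variability on clearance and volume and a proportional residual error (all parameter values and design time points illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)

# One-compartment model, IV bolus: C(t) = (Dose / V) * exp(-(CL / V) * t).
dose, cl_pop, v_pop, omega, sigma = 10.0, 1.0, 5.0, 0.3, 0.1

def simulate_design(times, n_per_time=4):
    """Destructive ('quantic') sampling: each animal contributes one sample."""
    conc = []
    for t in times:
        cl = cl_pop * np.exp(rng.normal(0, omega, n_per_time))   # inter-animal
        v = v_pop * np.exp(rng.normal(0, omega, n_per_time))
        c = (dose / v) * np.exp(-(cl / v) * t)
        conc.append(c * np.exp(rng.normal(0, sigma, n_per_time)))  # residual error
    return np.array(conc)

# Candidate three- and four-point designs with the same total sample size.
c3 = simulate_design([0.5, 2.0, 8.0], n_per_time=4)
c4 = simulate_design([0.5, 2.0, 5.0, 8.0], n_per_time=3)
# Repeating this over many replicates and fitting the population model each
# time yields the percent-prediction-error comparisons described above.
```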

Journal ArticleDOI
TL;DR: In this article, generalized score and Wald tests are proposed to examine certain aspects of the specification of panel probit models, such as omitted and superfluous variables, heteroscedasticity, nonnormality, and random-coefficient variations.
Abstract: Generalized score and Wald tests are proposed to examine certain aspects of the specification of panel probit models. The procedures used to detect misspecifications, such as omitted and superfluous variables, heteroscedasticity, nonnormality, and random-coefficient variations, are designed specifically for the case in which the model is estimated by sequential methods. The finite-sample distributions of the test statistics in correctly and incorrectly specified models are investigated by a Monte Carlo study. Applying the proposed procedures to the analysis of the determinants of self-employment in Germany shows that the suggested specification checks are useful instruments in applied work.

Journal ArticleDOI
TL;DR: In this article, a scale-change model is proposed that incorporates unobserved heterogeneity through a random effect entering the baseline hazard function to change its time scale.
Abstract: Frailty models are effective in broadening the class of survival models and inducing dependence in multivariate survival distributions. In proportional hazards, the random effect multiplies the hazard function. The scale-change model incorporates unobserved heterogeneity through a random effect that enters the baseline hazard function to change the time scale. We interpret this random effect as frailty, or other unobserved risks that create heterogeneity in the population. This model produces a wide range of shapes for univariate survival and hazard functions. We extend this model to multivariate survival data by assuming that members of a group share a common random effect. This structure induces association among the survival times in a group and provides alternative association structures to the proportional hazards frailty model. We present parametric and semiparametric estimation techniques and illustrate these methods with an example.


Journal ArticleDOI
TL;DR: An EM algorithm is developed to compute the maximum likelihood estimates of regression coefficients of the fixed effects and random effects, and variance components, and the likelihood ratio test is used for the preliminary testing of batch-to-batch variation.
Abstract: This paper proposes a normal mixed effects model for stability analysis. An EM algorithm is developed to compute the maximum likelihood estimates of regression coefficients of the fixed effects and random effects, and variance components. The likelihood ratio test is used for the preliminary testing of batch-to-batch variation. An example from a marketing stability study is given to illustrate the proposed procedure.