
Showing papers on "Random effects model published in 2004"


Journal ArticleDOI
TL;DR: A range of Bayesian hierarchical models using the Markov chain Monte Carlo software WinBUGS is presented; these multivariate random effects models allow for variation in true treatment effects across trials, and models in which the between-trials variance is homogeneous across treatment comparisons are considered alongside heterogeneous variance models.
Abstract: Mixed treatment comparison (MTC) meta-analysis is a generalization of standard pairwise meta-analysis for A vs B trials, to data structures that include, for example, A vs B, B vs C, and A vs C trials. There are two roles for MTC: one is to strengthen inference concerning the relative efficacy of two treatments, by including both 'direct' and 'indirect' comparisons. The other is to facilitate simultaneous inference regarding all treatments, in order for example to select the best treatment. In this paper, we present a range of Bayesian hierarchical models using the Markov chain Monte Carlo software WinBUGS. These are multivariate random effects models that allow for variation in true treatment effects across trials. We consider models where the between-trials variance is homogeneous across treatment comparisons as well as heterogeneous variance models. We also compare models with fixed (unconstrained) baseline study effects with models with random baselines drawn from a common distribution. These models are applied to an illustrative data set and posterior parameter distributions are compared. We discuss model critique and model selection, illustrating the role of Bayesian deviance analysis, and node-based model criticism. The assumptions underlying the MTC models and their parameterization are also discussed.
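A minimal numpy sketch (illustrative numbers, not the paper's WinBUGS models) of the consistency relation at the heart of MTC: an indirect A-vs-C estimate is built from the A-vs-B and B-vs-C summaries, then pooled with the direct A-vs-C evidence by inverse-variance weighting.

```python
import numpy as np

# Hypothetical pairwise random-effects summaries: mean log odds ratio, variance
d_AB, v_AB = 0.40, 0.04   # A vs B
d_BC, v_BC = 0.25, 0.05   # B vs C
d_AC, v_AC = 0.70, 0.06   # A vs C (direct evidence)

# MTC consistency equation: indirect A-vs-C estimate
d_ind = d_AB + d_BC
v_ind = v_AB + v_BC       # variances add for independent comparisons

# Inverse-variance pooling of direct and indirect evidence
w_dir, w_ind = 1 / v_AC, 1 / v_ind
d_mtc = (w_dir * d_AC + w_ind * d_ind) / (w_dir + w_ind)
v_mtc = 1 / (w_dir + w_ind)

print(f"indirect: {d_ind:.3f}, pooled: {d_mtc:.3f} (SE {np.sqrt(v_mtc):.3f})")
```

The paper's hierarchical models estimate all contrasts jointly with between-trials variation; this two-line pooling only illustrates how indirect evidence strengthens a comparison.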

1,861 citations


Journal ArticleDOI
TL;DR: This work introduces to neuroimage modelling the approach of reference priors, which drives the choice of prior such that it is noninformative in an information-theoretic sense, and proposes two inference techniques at the top level for multilevel hierarchies.

1,582 citations


Journal ArticleDOI
TL;DR: In this paper, a general approach to estimating quantile regression models for longitudinal data is proposed, employing ℓ1 regularization methods and building on the penalized least squares interpretation of the classical random effects estimator.
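A hedged sketch of the idea on simulated data: subject-specific intercepts enter a check-loss fit and are shrunk toward zero by an ℓ1 penalty, solved as a linear program. The data, the single quantile tau = 0.5, and the penalty lambda are illustrative choices, not the paper's.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, T, p = 20, 5, 2                         # subjects, obs per subject, covariates
n = m * T
ids = np.repeat(np.arange(m), T)
X = rng.normal(size=(n, p))
alpha_true = rng.normal(scale=0.5, size=m)
y = X @ np.array([1.0, -0.5]) + alpha_true[ids] + rng.normal(size=n)

Z = np.zeros((n, m)); Z[np.arange(n), ids] = 1.0
tau, lam = 0.5, 1.0

# Variables: beta+, beta-, alpha+, alpha-, u+, u- (all nonnegative);
# check loss becomes tau*u+ + (1-tau)*u-, |alpha| becomes alpha+ + alpha-
c = np.concatenate([np.zeros(2 * p),
                    lam * np.ones(2 * m),
                    tau * np.ones(n), (1 - tau) * np.ones(n)])
A_eq = np.hstack([X, -X, Z, -Z, np.eye(n), -np.eye(n)])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")

beta = res.x[:p] - res.x[p:2 * p]
alpha = res.x[2 * p:2 * p + m] - res.x[2 * p + m:2 * p + 2 * m]
print("beta:", np.round(beta, 3), " nonzero alphas:", np.sum(np.abs(alpha) > 1e-8))
```

Raising lambda drives more subject intercepts exactly to zero, which is the shrinkage role the random effects play in the penalized least squares analogy.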

1,516 citations


Journal ArticleDOI
TL;DR: In this paper, a selection of panel studies appearing in the American Sociological Review and the American Journal of Sociology between 1990 and 2003 shows that sociologists have been slow to capitalize on the advantages of panel data for controlling unobservables that threaten causal inference in observational studies.
Abstract: A selection of panel studies appearing in the American Sociological Review and the American Journal of Sociology between 1990 and 2003 shows that sociologists have been slow to capitalize on the advantages of panel data for controlling unobservables that threaten causal inference in observational studies. This review emphasizes regression methods that capitalize on the strengths of panel data for consistently estimating causal parameters in models for metric outcomes when measured explanatory variables are correlated with unit-specific unobservables. Both static and dynamic models are treated. Among the major subjects are fixed versus random effects methods, Hausman tests, Hausman-Taylor models, and instrumental variables methods, including Arellano-Bond and Anderson-Hsiao estimation for models with lagged endogenous variables.
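To make the fixed-versus-random-effects comparison concrete, here is a hedged numpy sketch of a Hausman test on a simulated balanced panel in which the regressor is correlated with the unit effect, so the random effects estimator is inconsistent. Variance components follow a simple Swamy-Arora-style construction; a real analysis would use a dedicated panel econometrics package.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 200, 6                                  # units, periods (balanced panel)
ids = np.repeat(np.arange(N), T)
c = rng.normal(size=N)                         # unit-specific unobservable
x = 0.8 * c[ids] + rng.normal(size=N * T)      # regressor correlated with c
y = 1.0 + 0.5 * x + c[ids] + rng.normal(size=N * T)

def gmeans(v):                                 # group (unit) means
    return np.bincount(ids, weights=v) / T

# Fixed effects (within) estimator
xw, yw = x - gmeans(x)[ids], y - gmeans(y)[ids]
b_fe = (xw @ yw) / (xw @ xw)
sig_e2 = np.sum((yw - b_fe * xw) ** 2) / (N * T - N - 1)
v_fe = sig_e2 / (xw @ xw)

# Variance components from the between regression, then RE via quasi-demeaning
Xb = np.column_stack([np.ones(N), gmeans(x)])
bb = np.linalg.lstsq(Xb, gmeans(y), rcond=None)[0]
s_btw = np.sum((gmeans(y) - Xb @ bb) ** 2) / (N - 2)  # estimates sig_u2 + sig_e2/T
sig_u2 = max(s_btw - sig_e2 / T, 0.0)
theta = 1 - np.sqrt(sig_e2 / (sig_e2 + T * sig_u2))

Xr = np.column_stack([np.full(N * T, 1 - theta), x - theta * gmeans(x)[ids]])
yr = y - theta * gmeans(y)[ids]
b_re = np.linalg.lstsq(Xr, yr, rcond=None)[0][1]
v_re = sig_e2 * np.linalg.inv(Xr.T @ Xr)[1, 1]

# Hausman statistic; chi-square(1) under H0 (x uncorrelated with unit effects),
# using that Var(b_FE) - Var(b_RE) >= 0 when RE is efficient under H0
H = (b_fe - b_re) ** 2 / (v_fe - v_re)
print(f"b_FE = {b_fe:.3f}, b_RE = {b_re:.3f}, Hausman H = {H:.1f}")
```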

885 citations


Journal ArticleDOI
01 Apr 2004-The Auk
TL;DR: In this paper, a generalized linear model is presented and illustrated that gives ornithologists access to a flexible, suitable alternative to logistic regression that is appropriate when exposure periods vary, as they usually do.
Abstract: Logistic regression has become increasingly popular for modeling nest success in terms of nest-specific explanatory variables. However, logistic regression models for nest fate are inappropriate when applied to data from nests found at various ages, for the same reason that the apparent estimator of nest success is biased (i.e. older clutches are more likely to be successful than younger clutches). A generalized linear model is presented and illustrated that gives ornithologists access to a flexible, suitable alternative to logistic regression that is appropriate when exposure periods vary, as they usually do. Unlike the Mayfield method (1961, 1975) and the logistic regression method of Aebischer (1999), the logistic-exposure model requires no assumptions about when nest losses occur. Nest survival models involving continuous and categorical explanatory variables, multiway classifications, and time-specific (e.g. nest age) and random effects are easily implemented with the logistic-exposure model...
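A minimal sketch of the logistic-exposure likelihood on simulated data (the covariate and parameter values are made up): daily survival is logit-linear in covariates, and an interval of t days is survived with probability s**t, which is what accommodates varying exposure periods.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(2)
n = 500
cover = rng.uniform(0, 1, n)             # hypothetical nest-cover covariate
t = rng.integers(1, 15, n)               # exposure days, varying by nest
s_true = expit(2.0 + 1.5 * cover)        # true daily survival probability
y = rng.uniform(size=n) < s_true ** t    # interval outcome (1 = survived)

def negloglik(beta):
    s = expit(beta[0] + beta[1] * cover)  # daily survival, logit-linear
    p = np.clip(s ** t, 1e-12, 1 - 1e-12) # interval survival probability
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

fit = minimize(negloglik, x0=np.zeros(2), method="BFGS")
print("estimated (intercept, slope):", np.round(fit.x, 2))
```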

803 citations


Journal ArticleDOI
TL;DR: In this article, the authors show that a small sample size at level two (meaning a sample of 50 or less) leads to biased estimates of the second-level standard errors and that only the standard errors for the random effects at the second level are highly inaccurate if the distributional assumptions concerning the level-2 errors are not fulfilled.
Abstract: A multilevel problem concerns a population with a hierarchical structure. A sample from such a population can be described as a multistage sample. First, a sample of higher level units is drawn (e.g. schools or organizations), and next a sample of the sub-units from the available units (e.g. pupils in schools or employees in organizations). In such samples, the individual observations are in general not completely independent. Multilevel analysis software accounts for this dependence and in recent years these programs have been widely accepted. Two problems that occur in the practice of multilevel modeling will be discussed. The first problem is the choice of the sample sizes at the different levels. What are sufficient sample sizes for accurate estimation? The second problem is the normality assumption of the level-2 error distribution. When one wants to conduct tests of significance, the errors need to be normally distributed. What happens when this is not the case? In this paper, simulation studies are used to answer both questions. With respect to the first question, the results show that a small sample size at level two (meaning a sample of 50 or less) leads to biased estimates of the second-level standard errors. The answer to the second question is that only the standard errors for the random effects at the second level are highly inaccurate if the distributional assumptions concerning the level-2 errors are not fulfilled. Robust standard errors turn out to be more reliable than the asymptotic standard errors based on maximum likelihood.
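The following simulation sketch, in the spirit of the paper's second question, uses balanced one-way ANOVA estimators (not the multilevel software the paper evaluates) to show that normality-based standard errors for the level-2 variance component understate the true sampling variability when the level-2 errors are skewed.

```python
import numpy as np

rng = np.random.default_rng(3)
K, n_per, reps = 30, 10, 2000            # level-2 units (small), unit size, replications

est, model_se = [], []
for _ in range(reps):
    u = (rng.chisquare(1, K) - 1) / np.sqrt(2)   # skewed level-2 errors, mean 0, var 1
    y = u[:, None] + rng.normal(0, 1, (K, n_per))
    msb = n_per * np.var(y.mean(axis=1), ddof=1)              # between mean square
    msw = np.var(y - y.mean(axis=1, keepdims=True)) \
          * (K * n_per) / (K * (n_per - 1))                    # within mean square
    est.append((msb - msw) / n_per)                            # sigma_u^2 estimate
    # normality-based sampling variance of that estimate
    v = (2 * msb**2 / (K - 1) + 2 * msw**2 / (K * (n_per - 1))) / n_per**2
    model_se.append(np.sqrt(v))

print(f"empirical SD of sigma_u2_hat: {np.std(est):.3f}")
print(f"mean normality-based SE:      {np.mean(model_se):.3f}")
```

With the skewed level-2 distribution, the empirical spread exceeds the model-based SE, echoing the paper's finding that the second-level random-effect standard errors are the ones that suffer when normality fails.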

752 citations


Journal ArticleDOI
TL;DR: A tractable gamma-process model incorporating a random effect is constructed and fitted to data on crack growth; corresponding goodness-of-fit tests are carried out, and prediction calculations for failure times defined in terms of degradation level passages are developed and illustrated.
Abstract: The gamma process is a natural model for degradation processes in which deterioration is supposed to take place gradually over time in a sequence of tiny increments. When units or individuals are observed over time it is often apparent that they degrade at different rates, even though no differences in treatment or environment are present. Thus, in applying gamma-process models to such data, it is necessary to allow for such unexplained differences. In the present paper this is accomplished by constructing a tractable gamma-process model incorporating a random effect. The model is fitted to some data on crack growth and corresponding goodness-of-fit tests are carried out. Prediction calculations for failure times defined in terms of degradation level passages are developed and illustrated.
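An illustrative simulation (not the paper's crack-growth data) of the model's structure: gamma-distributed increments whose scale is governed by a unit-specific random effect, with failure defined as first passage of a degradation threshold.

```python
import numpy as np
from scipy.stats import gamma

rng = np.random.default_rng(4)
n_units, n_steps, dt, v = 8, 100, 0.1, 2.0
threshold = 5.0                                # failure = first passage of this level

# unit-specific random rates: larger u -> smaller increments -> slower wear
u = gamma.rvs(a=4, scale=0.5, size=n_units, random_state=rng)
incr = gamma.rvs(a=v * dt, scale=1.0 / u[:, None],
                 size=(n_units, n_steps), random_state=rng)
paths = np.cumsum(incr, axis=1)                # monotone degradation paths

# failure time: first time cumulative degradation crosses the threshold
crossed = paths >= threshold
fail_idx = np.where(crossed.any(axis=1), crossed.argmax(axis=1), n_steps)
print("failure times:", np.round(fail_idx * dt, 1))   # n_steps*dt means censored
```

The spread of the simulated failure times comes almost entirely from the random effect u, which is exactly the unexplained unit-to-unit variation the paper's model is built to absorb.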

525 citations


Journal ArticleDOI
TL;DR: This meta-analysis confirms the existence of ERP deficits in schizophrenia, and the magnitude of these deficits is similar to the most robust findings reported in neuroimaging and neuropsychology in schizophrenia.

524 citations


Journal ArticleDOI
TL;DR: Simple Monte Carlo methods are derived that extend the use of EVSI calculations to medical decision applications with multiple sources of uncertainty, with particular attention to the form in which epidemiological data and research findings are structured.
Abstract: There has been an increasing interest in using expected value of information (EVI) theory in medical decision making, to identify the need for further research to reduce decision uncertainty and as a tool for sensitivity analysis. Expected value of sample information (EVSI) has been proposed for determination of optimum sample size and allocation rates in randomized clinical trials. This article derives simple Monte Carlo, or nested Monte Carlo, methods that extend the use of EVSI calculations to medical decision applications with multiple sources of uncertainty, with particular attention to the form in which epidemiological data and research findings are structured. In particular, information on key decision parameters such as treatment efficacy is invariably available on measures of relative efficacy such as risk differences or odds ratios, but not on model parameters themselves. In addition, estimates of model parameters and of relative effect measures in the literature may be heterogeneous, reflecting additional sources of variation besides statistical sampling error. The authors describe Monte Carlo procedures for calculating EVSI for probability, rate, or continuous variable parameters in multiparameter decision models and approximate methods for relative measures such as risk differences, odds ratios, risk ratios, and hazard ratios. Where prior evidence is based on a random effects meta-analysis, the authors describe two different EVSI calculations, one relevant for decisions concerning a specific patient group and the other for decisions concerning the entire population of patient groups. They also consider EVSI methods for new studies intended to update information on both baseline treatment efficacy and the relative efficacy of two treatments. Although there are restrictions regarding models with prior correlation between parameters, these methods can be applied to the majority of probabilistic decision models. Illustrative worked examples of EVSI calculations are given in an appendix.
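A minimal EVSI sketch under stated assumptions: the incremental net benefit of one treatment over another is normal a priori, a proposed trial reports a normal sample mean, and normal conjugacy replaces the inner Monte Carlo loop with a closed-form posterior mean. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)
mu0, sigma0 = 500.0, 1200.0     # prior on incremental net benefit (B vs A)
sigma, n = 8000.0, 200          # per-patient sampling SD, proposed trial size
M = 200_000                     # outer Monte Carlo draws

theta = rng.normal(mu0, sigma0, M)            # draw truth from the prior
xbar = rng.normal(theta, sigma / np.sqrt(n))  # prior-predictive trial result

prec0, prec_d = 1 / sigma0**2, n / sigma**2   # prior and data precisions
post_mean = (prec0 * mu0 + prec_d * xbar) / (prec0 + prec_d)

# value of deciding after the trial minus value of the current decision
evsi = np.mean(np.maximum(post_mean, 0.0)) - max(mu0, 0.0)
print(f"EVSI per decision: {evsi:.1f}")
```

The article's nested Monte Carlo procedures replace this conjugate shortcut with an inner simulation over the posterior, which is what makes the approach workable for relative-effect measures and heterogeneous evidence.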

389 citations


Journal ArticleDOI
TL;DR: This paper examined several alternative approaches to stochastic frontier analysis with panel data, and applied some of them to the World Health Organization's (WHO) panel data set on health care delivery, which is a 191 country, 5-year panel.
Abstract: The most commonly used approaches to parametric (stochastic frontier) analysis of efficiency in panel data, notably the fixed and random effects models, fail to distinguish between cross individual heterogeneity and inefficiency. This blending of effects is particularly problematic in the World Health Organization's (WHO) panel data set on health care delivery, which is a 191 country, 5-year panel. The wide variation in cultural and economic characteristics of the worldwide sample produces a large amount of unmeasured heterogeneity in the data. This study examines several alternative approaches to stochastic frontier analysis with panel data, and applies some of them to the WHO data. A more general, flexible model and several measured indicators of cross country heterogeneity are added to the analysis done by previous researchers. Results suggest that there is considerable heterogeneity that has masqueraded as inefficiency in other studies using the same data.
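For concreteness, a hedged sketch of the cross-sectional building block underlying stochastic frontier analysis: pooled normal/half-normal maximum likelihood on simulated data. The paper's panel models add heterogeneity on top of this; the sketch is illustrative only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(6)
n = 1000
x = rng.normal(size=n)
u = np.abs(rng.normal(0, 0.6, n))        # one-sided inefficiency
v = rng.normal(0, 0.3, n)                # symmetric noise
y = 1.0 + 0.7 * x + v - u                # production frontier

def negloglik(p):
    b0, b1, log_su, log_sv = p
    su, sv = np.exp(log_su), np.exp(log_sv)
    sig = np.hypot(su, sv)               # sigma = sqrt(su^2 + sv^2)
    lam = su / sv
    eps = y - b0 - b1 * x
    # normal/half-normal composed-error log density
    ll = (np.log(2) - np.log(sig) + norm.logpdf(eps / sig)
          + norm.logcdf(-eps * lam / sig))
    return -np.sum(ll)

fit = minimize(negloglik, x0=np.array([0.0, 0.0, -1.0, -1.0]), method="BFGS")
b0, b1, log_su, log_sv = fit.x
print(f"frontier: {b0:.2f} + {b1:.2f}x, sigma_u = {np.exp(log_su):.2f}, "
      f"sigma_v = {np.exp(log_sv):.2f}")
```

The paper's point is that when panel unit effects are simply folded into u or removed entirely, genuine cross-country heterogeneity ends up mislabeled as inefficiency (or vice versa).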

375 citations


Journal ArticleDOI
01 Jun 2004-Ecology
TL;DR: In this article, a distance-sampling model that accommodates covariate effects on abundance is proposed, which is based on specification of the distance sampling likelihood at the level of the sample unit in terms of local abundance (for each sampling unit).
Abstract: Distance-sampling methods are commonly used in studies of animal populations to estimate population density. A common objective of such studies is to evaluate the relationship between abundance or density and covariates that describe animal habitat or other environmental influences. However, little attention has been focused on methods of modeling abundance covariate effects in conventional distance-sampling models. In this paper we propose a distance-sampling model that accommodates covariate effects on abundance. The model is based on specification of the distance-sampling likelihood at the level of the sample unit in terms of local abundance. This model is augmented with a Poisson regression model for local abundance that is parameterized in terms of available covariates. Maximum-likelihood estimation of detection and density parameters is based on the integrated likelihood, wherein local abundance is removed from the likelihood by integration. We provide an example using avian point-transect data of Ovenbirds (Seiurus aurocapillus) collected using a distance-sampling protocol and two measures of habitat structure (understory cover and basal area of overstory trees). The model yields a sensible description (a positive effect of understory cover, a negative effect of basal area) of the relationship between habitat and Ovenbird density that can be used to evaluate the effects of habitat management on Ovenbird populations.
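A hedged sketch of the integrated-likelihood idea on simulated point-transect data (the covariate, survey radius, and parameter values are made up): integrating Poisson local abundance out of the likelihood leaves a Poisson model for detected counts multiplied by a density for the detected distances.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
J, w = 100, 1.0                              # survey points, truncation radius
cover = rng.uniform(0, 1, J)                 # hypothetical habitat covariate
b0_t, b1_t, s_t = 1.0, 1.0, 0.4              # true log-abundance and detection params
lam_true = np.exp(b0_t + b1_t * cover)

# simulate: N_i ~ Poisson, birds uniform in the disc, half-normal detection
counts, dists = np.zeros(J, int), []
for i in range(J):
    N = rng.poisson(lam_true[i])
    r = w * np.sqrt(rng.uniform(size=N))     # uniform location in a disc
    det = rng.uniform(size=N) < np.exp(-r**2 / (2 * s_t**2))
    counts[i] = det.sum()
    dists.extend(r[det])
dists = np.asarray(dists)

def negloglik(p):
    b0, b1, log_s = p
    s2 = np.exp(2 * log_s)
    lam = np.exp(b0 + b1 * cover)
    # average detection probability over the disc: (2/w^2) * int_0^w r g(r) dr
    pbar = (2 * s2 / w**2) * (1 - np.exp(-w**2 / (2 * s2)))
    ll = np.sum(counts * np.log(lam * pbar) - lam * pbar)        # Poisson kernel
    ll += np.sum(np.log(dists) - dists**2 / (2 * s2))            # distance density
    ll -= dists.size * np.log(s2 * (1 - np.exp(-w**2 / (2 * s2))))  # its normalizer
    return -ll

fit = minimize(negloglik, x0=np.array([0.0, 0.0, -1.0]), method="Nelder-Mead")
print("b0, b1, sigma:", np.round([fit.x[0], fit.x[1], np.exp(fit.x[2])], 2))
```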

Journal ArticleDOI
TL;DR: In this paper, a random effects model is proposed for the analysis of binary dyadic data that represent a social network or directed graph, using nodal and/or dyadic attributes as covariates.
Abstract: A random effects model is proposed for the analysis of binary dyadic data that represent a social network or directed graph, using nodal and/or dyadic attributes as covariates. The network structure is reflected by modeling the dependence between the relations to and from the same actor or node. Parameter estimates are proposed that are based on an iterated generalized least-squares procedure. An application is presented to a data set on friendship relations between American lawyers.

Journal ArticleDOI
TL;DR: In this article, the authors consider mean squared errors (MSE) of empirical predictors under a general setup, where ML or REML estimators are used for the second stage.
Abstract: The term “empirical predictor” refers to a two-stage predictor of a linear combination of fixed and random effects. In the first stage, a predictor is obtained but it involves unknown parameters; thus, in the second stage, the unknown parameters are replaced by their estimators. In this paper, we consider mean squared errors (MSE) of empirical predictors under a general setup, where ML or REML estimators are used for the second stage. We obtain second-order approximation to the MSE as well as an estimator of the MSE correct to the same order. The general results are applied to mixed linear models to obtain a second-order approximation to the MSE of the empirical best linear unbiased predictor (EBLUP) of a linear mixed effect and an estimator of the MSE of EBLUP whose bias is correct to second order. The general mixed linear model includes the mixed ANOVA model and the longitudinal model as special cases.
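As a worked special case, the well-known Fay-Herriot area-level model gives the standard second-order form of this approximation (stated here from the Prasad-Rao line of results rather than the paper's general derivation):

```latex
% Fay-Herriot model: y_i = x_i' beta + v_i + e_i,  v_i ~ N(0, A),
% e_i ~ N(0, psi_i) with psi_i known, i = 1, ..., m.
\[
\hat\theta_i = x_i^{\top}\hat\beta + \gamma_i\,(y_i - x_i^{\top}\hat\beta),
\qquad \gamma_i = \frac{A}{A+\psi_i},
\]
\[
\mathrm{MSE}(\hat\theta_i) \;\approx\; g_{1i}(A) + g_{2i}(A) + 2\,g_{3i}(A),
\]
\[
g_{1i}(A) = \gamma_i\psi_i,\qquad
g_{2i}(A) = (1-\gamma_i)^2\, x_i^{\top}\Bigl(\sum_{j=1}^{m}
  \frac{x_j x_j^{\top}}{A+\psi_j}\Bigr)^{-1} x_i,\qquad
g_{3i}(A) = \frac{\psi_i^{2}}{(A+\psi_i)^{3}}\,\mathrm{var}(\hat A).
\]
```

The leading term g_{1i} is O(1) while g_{2i} and g_{3i} are O(1/m); plugging a REML estimate of A into g_{1i} + g_{2i} + 2g_{3i} compensates the downward plug-in bias of g_{1i}, which is why the third term enters twice, mirroring the second-order-correct MSE estimator described in the abstract.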

Journal ArticleDOI
TL;DR: The proposed amplitude test does not suffer from delay-induced bias, and a model incorporating temporal derivatives provides a more natural test for amplitude differences; the approach is applied in a random-effects analysis of 100 subjects.

Journal ArticleDOI
TL;DR: This report shows how the hierarchical summary receiver operating characteristic (HSROC) model may be fitted using the SAS procedure NLMIXED, and compares the results with those of a fully Bayesian analysis using an example.

Journal ArticleDOI
TL;DR: Overall, in between-study comparisons, panels of children with diagnosed asthma or pre-existing respiratory symptoms appeared less affected by PM10 levels than those without, and effect estimates were larger where studies were conducted in higher ozone conditions; given the heterogeneity and evidence consistent with publication bias, only limited confidence may be placed on summary estimates of effect.
Abstract: Background: Panel studies have been used to investigate the short term effects of outdoor particulate air pollution across a wide range of environmental settings. Aims: To systematically review the results of such studies in children, estimate summary measures of effect, and investigate potential sources of heterogeneity. Methods: Studies were identified by searching electronic databases to June 2002, including those where outcomes and particulate level measurements were made at least daily for ≥8 weeks, and analysed using an appropriate regression model. Study results were compared using forest plots, and fixed and random effects summary effect estimates obtained. Publication bias was considered using a funnel plot. Results: Twenty two studies were identified, all except two reporting PM10 (24 hour mean) >50 μg·m⁻³. Reported effects of PM10 on PEF were widely spread and smaller than those for PM2.5 (fixed effects summary: −0.012 vs −0.063 l·min⁻¹ per μg·m⁻³ rise). A similar pattern was evident for symptoms. Random effects models produced larger estimates. Overall, in between-study comparisons, panels of children with diagnosed asthma or pre-existing respiratory symptoms appeared less affected by PM10 levels than those without, and effect estimates were larger where studies were conducted in higher ozone conditions. Larger PM10 effect estimates were obtained from studies using generalised estimating equations to model autocorrelation and where results were derived by pooling subject specific regression coefficients. A funnel plot of PM10 results for PEF was markedly asymmetrical. Conclusions: The majority of identified studies indicate an adverse effect of particulate air pollution that is greater for PM2.5 than PM10. However, results show considerable heterogeneity and there is evidence consistent with publication bias, so limited confidence may be placed on summary estimates of effect. The possibility of interaction between particle and ozone effects merits further investigation, as does variability due to analytical differences that alter the interpretation of final estimates.

Book
01 Aug 2004
TL;DR: In this book, a general framework for the meta-analysis of effect sizes is presented, and the major estimation approaches are compared in a Monte Carlo study of their sampling behaviour across homogeneous and heterogeneous universes of studies.
Abstract: Preface Introduction Theory: Statistical Methods of Meta-Analysis Effect Sizes Families of Effect Sizes The r Family: Correlation Coefficients as Effect Sizes The d Family: Standardized Mean Differences as Effect Sizes Conversion of Effect Sizes A General Framework of Meta-Analysis Fixed Effects Model Random Effects Model Mixture Models Classes of Situations for the Application of Meta-Analysis Approaches to Meta-Analysis Hedges and Olkin Procedures for r as Effect Size Procedures for d as Effect Size Rosenthal and Rubin Hunter and Schmidt Refined Approaches DerSimonian-Laird Olkin and Pratt Changes in Parameters to be Estimated by the Choice of an Approach Comparisons of the Approaches Summary Method: Monte Carlo Study Aims and General Procedure Distributions in the Universe of Studies Parameters Drawing Random Correlation Coefficients Approximations to the Sampling Distribution of r Evaluation of the Approximations Details of Programming Summary Results Preliminaries Estimation of Parameter μr Bias and Accuracy Homogeneous Situation S1 Heterogeneous Situation S2 Heterogeneous Situation S3 Relative Efficiency of the Estimators Significance Tests: Testing μr = 0 Confidence Intervals Homogeneity Tests The Q-Test Homogeneous Situation S1: Type I Error Rates Heterogeneous Situations S2 and S3: Power The Hunter-Schmidt Approach to the Test of Homogeneity: The 75 per cent and 90 per cent Rule Estimates of Heterogeneity Variance Homogeneous Situation S1 Heterogeneous Situations S2 and S3 Summary Discussion List of Figures List of Tables Nomenclature References Appendix A: Technical Details of the Simulation Procedure Beta Distributions in the Universe of Effect Sizes An Annotated Mathematica™ Notebook for a Comparison of Approximations to the Exact Density of R Appendix B: Tables and Figures of Results Estimation of the Parameter μr Subject Index Author Index
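As a concrete instance of one approach the book compares, here is a minimal DerSimonian-Laird random-effects computation on made-up study summaries.

```python
import numpy as np

y = np.array([0.30, 0.10, 0.45, 0.25, 0.60])   # study effect sizes (illustrative)
v = np.array([0.04, 0.02, 0.06, 0.03, 0.08])   # within-study variances

w = 1 / v
y_fixed = np.sum(w * y) / np.sum(w)             # fixed-effect pooled estimate
Q = np.sum(w * (y - y_fixed) ** 2)              # Cochran's homogeneity statistic
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w**2) / np.sum(w)))

w_star = 1 / (v + tau2)                         # random-effects weights
y_rand = np.sum(w_star * y) / np.sum(w_star)
se_rand = np.sqrt(1 / np.sum(w_star))
print(f"Q = {Q:.2f}, tau^2 = {tau2:.3f}, RE mean = {y_rand:.3f} (SE {se_rand:.3f})")
```

When Q is no larger than its degrees of freedom, tau^2 truncates to zero and the random-effects estimate collapses to the fixed-effect one, which is one of the behaviours the book's Monte Carlo study examines.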

Journal ArticleDOI
TL;DR: Two simple models for binary response data are studied, examining the effects of assuming normality, or of using a nonparametric fitting procedure, for the random effects when the true distribution is potentially far from normal.

Journal ArticleDOI
TL;DR: In this article, the authors used an overdispersed Poisson regression with fixed and random effects, fitted by Markov chain Monte Carlo methods, to predict the abundance of the Cerulean Warbler (Dendroica cerulea) in the Prairie-Hardwood Transition of the upper midwestern United States.
Abstract: Surveys collecting count data are the primary means by which abundance is indexed for birds. These counts are confounded, however, by nuisance effects including observer effects and spatial correlation between counts. Current methods poorly accommodate both observer and spatial effects because modeling these spatially autocorrelated counts within a hierarchical framework is not practical using standard statistical approaches. We propose a Bayesian approach to this problem and provide as an example of its implementation a spatial model of predicted abundance for the Cerulean Warbler (Dendroica cerulea) in the Prairie-Hardwood Transition of the upper midwestern United States. We used an overdispersed Poisson regression with fixed and random effects, fitted by Markov chain Monte Carlo methods. We used 21 years of North American Breeding Bird Survey counts as the response in a loglinear function of explanatory variables describing habitat, spatial relatedness, year effects, and observer effects. The model included a conditional autoregressive term representing potential correlation between adjacent route counts. Categories of explanatory habitat variables in the model included land cover composition and configuration, climate, terrain heterogeneity, and human influence. The inherent hierarchy in the model was from counts occurring, in part, as a function of observers within survey routes within years. We found that the percentage of forested wetlands, an index of wetness potential, and an interaction between mean annual precipitation and deciduous forest patch size best described Cerulean Warbler abundance. Based on a map of relative abundance derived from the posterior parameter estimates, we estimated that only 15% of the species' population occurred on federal land, necessitating active engagement of public landowners and state agencies in the conservation of the breeding habitat for this species. Models of this type can be applied to any data in which the response is counts, such as animal counts, activity (e.g., nest) counts, or species richness. The most noteworthy practical application of this spatial modeling approach is the ability to map relative species abundance. The functional relationships that we elucidated for the Cerulean Warbler provide a basis for the development of management programs and may serve to focus management and monitoring on areas and habitat variables important to Cerulean Warblers.

Posted Content
TL;DR: In this paper, a review of linear panel data models with slope heterogeneity is presented; various types of random coefficients models are introduced along with a common framework for dealing with them, and the fundamental issues of statistical inference for a random coefficients formulation are considered from both sampling and Bayesian perspectives.
Abstract: This paper provides a review of linear panel data models with slope heterogeneity, introduces various types of random coefficients models and suggests a common framework for dealing with them. It considers the fundamental issues of statistical inference of a random coefficients formulation using both the sampling and Bayesian approaches. The paper also provides a review of heterogeneous dynamic panels, testing for homogeneity under weak exogeneity, simultaneous equation random coefficient models, and the more recent developments in the area of cross-sectional dependence in panel data models.

Journal ArticleDOI
Erik Biørn
TL;DR: This article considers the estimation, by generalized least squares (GLS) and maximum likelihood (ML), of systems of regression equations with random individual effects from unbalanced panel data, where the unbalance is due to random attrition or accretion.

Book ChapterDOI
TL;DR: This chapter considers mixed-model regression analysis, which is a specific technique for analyzing longitudinal data that properly deals with within- and between-subjects variance, and applies nonlinear mixed-model regression analysis of the data at hand to demonstrate the considerable potential of this relatively novel statistical approach.
Abstract: This chapter considers mixed-model regression analysis, which is a specific technique for analyzing longitudinal data that properly deals with within- and between-subjects variance. The term "mixed model" refers to the inclusion of both fixed effects, which are model components used to define systematic relationships such as overall changes over time and/or experimentally induced group differences; and random effects, which account for variability among subjects around the systematic relationships captured by the fixed effects. To illustrate how the mixed-model regression approach can help analyze longitudinal data with large inter-individual differences, psychomotor vigilance data are considered from an experiment involving 88 h of total sleep deprivation, during which subjects received either sustained low-dose caffeine or placebo. The traditional repeated-measures analysis of variance (ANOVA) is applied, and it is shown that this method is not robust against systematic interindividual variability. The data are then reanalyzed using linear mixed-model regression analysis in order to properly take into account the interindividual differences. The chapter concludes with an application of nonlinear mixed-model regression analysis of the data at hand, to demonstrate the considerable potential of this relatively novel statistical approach.

Journal ArticleDOI
TL;DR: In this paper, Gibbs and block Gibbs samplers for a Bayesian hierarchical version of the one-way random effects model are considered and drift and minorization conditions are established for the underlying Markov chains.
Abstract: We consider Gibbs and block Gibbs samplers for a Bayesian hierarchical version of the one-way random effects model. Drift and minorization conditions are established for the underlying Markov chains. The drift and minorization are used in conjunction with results from J. S. Rosenthal [J. Amer. Statist. Assoc. 90 (1995) 558–566] and G. O. Roberts and R. L. Tweedie [Stochastic Process. Appl. 80 (1999) 211–229] to construct analytical upper bounds on the distance to stationarity. These lead to upper bounds on the amount of burn-in that is required to get the chain within a prespecified (total variation) distance of the stationary distribution. The results are illustrated with a numerical example.
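A minimal Gibbs sampler for a Bayesian one-way random effects model, sketching the kind of chain whose burn-in the paper bounds. Conjugate updates are used under inverse-gamma priors on both variances and a flat prior on the grand mean; the priors and data are illustrative, not the authors' exact parameterization.

```python
import numpy as np

rng = np.random.default_rng(7)
K, n = 10, 8
truth = rng.normal(0, 1, K)
y = truth[:, None] + rng.normal(0, 0.5, (K, n))   # simulated one-way data
ybar = y.mean(axis=1)

a, b = 2.0, 1.0                                   # inverse-gamma hyperparameters
mu, s2e, s2t = 0.0, 1.0, 1.0
theta = ybar.copy()
draws = []
for it in range(5000):
    # theta_i | rest : conjugate normal update (shrinks group means toward mu)
    prec = n / s2e + 1 / s2t
    mean = (n * ybar / s2e + mu / s2t) / prec
    theta = rng.normal(mean, np.sqrt(1 / prec))
    # mu | rest : normal update under a flat prior
    mu = rng.normal(theta.mean(), np.sqrt(s2t / K))
    # variances | rest : inverse-gamma updates via reciprocal gamma draws
    s2e = 1 / rng.gamma(a + K * n / 2,
                        1 / (b + 0.5 * np.sum((y - theta[:, None]) ** 2)))
    s2t = 1 / rng.gamma(a + K / 2, 1 / (b + 0.5 * np.sum((theta - mu) ** 2)))
    if it >= 1000:                                # discard burn-in
        draws.append([mu, s2e, s2t])

print("posterior means (mu, s2e, s2t):", np.round(np.mean(draws, axis=0), 3))
```

The burn-in of 1000 here is an arbitrary choice; the point of the paper is precisely to replace such ad hoc choices with analytical total-variation bounds.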

Journal ArticleDOI
TL;DR: A linear mixed model with a smooth random effects density is proposed and applied to the cholesterol data first analyzed by Zhang and Davidian; a simulation study shows that the approach yields almost unbiased estimates of the regression and smoothing parameters in small sample settings.
Abstract: A linear mixed model with a smooth random effects density is proposed. A similar approach to P-spline smoothing of Eilers and Marx (1996, Statistical Science 11, 89-121) is applied to yield a more flexible estimate of the random effects density. Our approach differs from theirs in that the B-spline basis functions are replaced by approximating Gaussian densities. Fitting the model involves maximizing a penalized marginal likelihood. The best penalty parameters minimize Akaike's Information Criterion employing Gray's (1992, Journal of the American Statistical Association 87, 942-951) results. Although our method is applicable to any dimension of the random effects structure, in this article the two-dimensional case is explored. Our methodology is conceptually simple and relatively easy to fit in practice; it is applied to the cholesterol data first analyzed by Zhang and Davidian (2001, Biometrics 57, 795-802). A simulation study shows that our approach yields almost unbiased estimates of the regression and the smoothing parameters in small sample settings. Consistency of the estimates is shown in a particular case.

Posted Content
TL;DR: This article reviewed quantitative methods that have been employed and evidence that has been gathered to assess the benefits of marriage and consequences of other family structures and discussed models of the determinants of different well-being outcomes and the role of family structure in producing those outcomes.
Abstract: This study critically reviews quantitative methods that have been employed and evidence that has been gathered to assess the benefits of marriage and consequences of other family structures. The study begins by describing theoretical models of the determinants of different well-being outcomes and the role of family structure in producing those outcomes. It also discusses models of the determinants of marriage. The study then overviews specific statistical techniques that have been applied in empirical analyses of the effects of marriage, including standard regression, instrumental variables, selection and switching models, matching, non-parametric bounds, fixed effects, and latent factor (correlated random effects) methods. The study then reviews selected studies that have been completed in three domains of well-being outcomes: children's well-being, adults' earnings, and adults' physical health.

Journal ArticleDOI
TL;DR: Using bivariate longitudinal measurements on pure-tone hearing thresholds, it will be shown that such a random-effects approach can yield misleading results for evaluating the relation between the different responses.
Abstract: Due to its flexibility, the random-effects approach for the joint modelling of multivariate longitudinal profiles received a lot of attention in recent publications. In this approach different mixed models are joined by specifying a common distribution for their random-effects. Parameter estimates of this common distribution can then be used to evaluate the relation between the different responses. Using bivariate longitudinal measurements on pure-tone hearing thresholds, it will be shown that such a random-effects approach can yield misleading results for evaluating this relationship.

Journal ArticleDOI
TL;DR: A mixture model that combines short-term and long-term components of a hazard function is developed, providing a more flexible hazard model that can incorporate different explanatory variables and random effects in each component.
Abstract: Accelerated failure time models with a shared random component are described, and are used to evaluate the effect of explanatory factors and different transplant centres on survival times following kidney transplantation. Different combinations of the distribution of the random effects and baseline hazard function are considered and the fit of such models to the transplant data is critically assessed. A mixture model that combines short- and long-term components of a hazard function is then developed, which provides a more flexible model for the hazard function. The model can incorporate different explanatory variables and random effects in each component. The model is straightforward to fit using standard statistical software, and is shown to be a good fit to the transplant data.

Journal ArticleDOI
TL;DR: In this paper, the authors discuss the interpretation of predictions formed including or excluding random terms, show the need for different weighting schemes that recognize nesting and aliasing during prediction, and demonstrate the necessity of being able to detect inestimable predictions.
Abstract: Summary Following estimation of effects from a linear mixed model, it is often useful to form predicted values for certain factor/variate combinations. The process has been well defined for linear models, but the introduction of random effects into the model means that a decision has to be made about the inclusion or exclusion of random model terms from the predictions. This paper discusses the interpretation of predictions formed including or excluding random terms. Four datasets are used to illustrate circumstances where different prediction strategies may be appropriate: in an orthogonal design, an unbalanced nested structure, a model with cubic smoothing spline terms and for kriging after spatial analysis. The examples also show the need for different weighting schemes that recognize nesting and aliasing during prediction, and the necessity of being able to detect inestimable predictions.


Journal ArticleDOI
TL;DR: In this paper, the effects of information on residential demand for electricity are measured using data from a Japanese experiment in which households had a continuous-display electricity-use monitoring device installed at their residences.
Abstract: This paper measures the effects of information on residential demand for electricity, using data from a Japanese experiment. In the experiment, households had a continuous-display, electricity use monitoring device installed at their residence. The monitor was designed so that each consumer could easily look at graphs and tables associated with the consumer's own usage of electricity at any time during the experiment. The panel data were used to estimate a random effects model of electricity and count data models of monitor usage. The results indicate that monitor usage contributed to energy conservation.