Journal ArticleDOI

Using the Effective Sample Size as the Stopping Criterion in Markov Chain Monte Carlo with the Bayes Module in Mplus

30 Jul 2021-Psych (Multidisciplinary Digital Publishing Institute)-Vol. 3, Iss: 3, pp 336-347
TL;DR: In this article, a multilevel structural equation model was fitted to a large number of simulated data sets, and different prespecified minimum ESS values were compared with the actual (empirical) ESS.
About: This article is published in Psych. The article was published on 2021-07-30 and is currently open access. It has received 6 citations to date. The article focuses on the topics: Markov chain Monte Carlo & Bayesian inference.
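To make the quantity at issue concrete, the sketch below estimates the ESS of a single chain from its sample autocorrelations, truncating the sum at the first negative autocorrelation. This is one common simple rule, not the exact computation Mplus performs internally; the function name and the AR(1) demo are illustrative assumptions.

```python
import numpy as np

def effective_sample_size(chain):
    """ESS of a 1-D chain: n / (1 + 2 * sum of autocorrelations),
    truncating the sum at the first negative autocorrelation
    (a simple common rule; Mplus uses its own variant)."""
    x = np.asarray(chain, dtype=float)
    n = x.size
    x = x - x.mean()
    f = np.fft.rfft(x, 2 * n)                        # FFT-based autocovariance
    acov = np.fft.irfft(f * np.conjugate(f))[:n] / n
    rho = acov / acov[0]                             # autocorrelations
    s = 0.0
    for k in range(1, n):
        if rho[k] < 0.0:
            break
        s += rho[k]
    return n / (1.0 + 2.0 * s)

# Demo: an AR(1) chain with autocorrelation phi has theoretical
# ESS of roughly n * (1 - phi) / (1 + phi), far below the nominal n.
rng = np.random.default_rng(0)
phi, n = 0.9, 20000
e = rng.standard_normal(n)
x = np.empty(n)
x[0] = e[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + e[t]
print(effective_sample_size(x))
```

For phi = 0.9 the estimate lands near n/19, which is why a prespecified minimum ESS is a more meaningful stopping criterion than a fixed number of iterations.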
Citations
Journal ArticleDOI
TL;DR: This article defines the Bayesian EAPs, discusses a way for estimating them, and shows how their estimates can be used to obtain the interaction and the quadratic effects of explanatory variables.
Abstract: Croon and van Veldhoven discussed a model for analyzing micro–macro multilevel designs in which a variable measured at the upper level is predicted by an explanatory variable that is measured at the lower level. Additionally, the authors proposed an approach for estimating this model. In their approach, estimation is carried out by running a regression analysis on Bayesian expected a posteriori (EAP) estimates. In this article, we present an extension of this approach to interaction and quadratic effects of explanatory variables. Specifically, we define the Bayesian EAPs, discuss a way of estimating them, and show how their estimates can be used to obtain the interaction and quadratic effects. We present the results of a "proof of concept" via Monte Carlo simulation, which we conducted to validate our approach and to compare two resampling procedures for obtaining standard errors. Finally, we discuss limitations of our proposed extended Bayesian EAP-based approach.

4 citations

Journal ArticleDOI
09 Jan 2023-Psych
TL;DR: In this paper, the author uses lavPredict to compute factor score estimates for organizational research where team leaders are evaluated by their employees, and discusses these issues from a measurement perspective.
Abstract: To compute factor score estimates, lavaan version 0.6–12 offers the function lavPredict( ) that can not only be applied in single-level modeling but also in multilevel modeling, where characteristics of higher-level units such as working environments or team leaders are often assessed by ratings of employees. Surprisingly, the function provides results that deviate from the expected ones. Specifically, whereas the function yields correct EAP estimates of higher-level factors, the ML estimates are counterintuitive and possibly incorrect. Moreover, the function does not provide the expected standard errors. I illustrate these issues using an example from organizational research where team leaders are evaluated by their employees, and I discuss these issues from a measurement perspective.

2 citations

Journal ArticleDOI
TL;DR: In this article, the authors describe how developmental researchers can implement, test, and interpret interaction effects in random-intercept cross-lagged panel models, using an empirical example from developmental psychopathology research based on data from the United Kingdom-based Millennium Cohort Study within a Bayesian structural equation modeling framework.
Abstract: Random-Intercept Cross-Lagged Panel Models allow for the decomposition of measurements into between- and within-person components and have hence become popular for testing developmental hypotheses. Here, we describe how developmental researchers can implement, test and interpret interaction effects in such models using an empirical example from developmental psychopathology research. We illustrate the analysis of Within × Within and Between × Within interactions utilising data from the United Kingdom-based Millennium Cohort Study within a Bayesian Structural Equation Modelling framework. We provide annotated Mplus code, allowing users to isolate, estimate and interpret the complexities of within-person and between-person dynamics as they unfold over time.

1 citation

Journal ArticleDOI
TL;DR: In this paper, two types of many-faceted (MF)-IRT models are developed to account for dynamic rater effects, assuming that rater severity can drift systematically or stochastically.
Abstract: Rater effects are commonly observed in rater-mediated assessments. By using item response theory (IRT) modeling, raters can be treated as independent factors that function as instruments for measuring ratees. Most rater effects are static and can be addressed appropriately within an IRT framework, and a few models have been developed for dynamic rater effects. Operational rating projects often require human raters to continuously and repeatedly score ratees over a certain period, imposing a burden on the cognitive processing abilities and attention spans of raters that stems from judgment fatigue and thus affects the rating quality observed during the rating period. As a result, ratees’ scores may be influenced by the order in which they are graded by raters in a rating sequence, and the rating order effect should be considered in new IRT models. In this study, two types of many-faceted (MF)-IRT models are developed to account for such dynamic rater effects, which assume that rater severity can drift systematically or stochastically. The results obtained from two simulation studies indicate that the parameters of the newly developed models can be estimated satisfactorily using Bayesian estimation and that disregarding the rating order effect produces biased model structure and ratee proficiency parameter estimations. A creativity assessment is outlined to demonstrate the application of the new models and to investigate the consequences of failing to detect the possible rating order effect in a real rater-mediated evaluation.

Journal ArticleDOI
02 Mar 2022-Psych
TL;DR: This editorial notes the tremendous progress that statistical software in psychometrics has made in providing open-source solutions.
Abstract: Statistical software in psychometrics has made tremendous progress in providing open source solutions (e [...]
References
Journal ArticleDOI
TL;DR: The focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, and the results are derived as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations.
Abstract: The Gibbs sampler, the algorithm of Metropolis and similar iterative simulation methods are potentially very helpful for summarizing multivariate distributions. Used naively, however, iterative simulation can give misleading answers. Our methods are simple and generally applicable to the output of any iterative simulation; they are designed for researchers primarily interested in the science underlying the data and models they are analyzing, rather than for researchers interested in the probability theory underlying the iterative simulations themselves. Our recommended strategy is to use several independent sequences, with starting points sampled from an overdispersed distribution. At each step of the iterative simulation, we obtain, for each univariate estimand of interest, a distributional estimate and an estimate of how much sharper the distributional estimate might become if the simulations were continued indefinitely. Because our focus is on applied inference for Bayesian posterior distributions in real problems, which often tend toward normality after transformations and marginalization, we derive our results as normal-theory approximations to exact Bayesian inference, conditional on the observed simulations. The methods are illustrated on a random-effects mixture model applied to experimental measurements of reaction times of normal and schizophrenic patients.

13,884 citations
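The multiple-sequence strategy described in this abstract underlies the potential scale reduction factor (R-hat). The sketch below implements the basic between/within variance comparison; the function name is my own, and later refinements (split chains, rank normalization) are omitted.

```python
import numpy as np

def gelman_rubin(chains):
    """Potential scale reduction factor (R-hat) for m parallel chains of
    length n: compares between-chain and within-chain variance
    (split-chain and rank-normalization refinements omitted)."""
    c = np.asarray(chains, dtype=float)
    m, n = c.shape
    W = c.var(axis=1, ddof=1).mean()          # mean within-chain variance
    B = n * c.mean(axis=1).var(ddof=1)        # between-chain variance
    var_plus = (n - 1) / n * W + B / n        # pooled posterior-variance estimate
    return np.sqrt(var_plus / W)

rng = np.random.default_rng(1)
mixed = rng.standard_normal((4, 2000))        # 4 well-mixed chains: R-hat near 1
stuck = mixed.copy()
stuck[0] += 5.0                               # one chain in a different region: R-hat >> 1
print(gelman_rubin(mixed), gelman_rubin(stuck))
```

Values near 1 indicate that the overdispersed starting points have converged to a common distribution; values well above 1 signal that the simulation should be continued.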

Journal ArticleDOI
TL;DR: In this paper, a folded-noncentral-$t$ family of conditionally conjugate priors for hierarchical standard deviation parameters is proposed, and weakly informative priors in this family are considered.
Abstract: Various noninformative prior distributions have been suggested for scale parameters in hierarchical models. We construct a new folded-noncentral-$t$ family of conditionally conjugate priors for hierarchical standard deviation parameters, and then consider noninformative and weakly informative priors in this family. We use an example to illustrate serious problems with the inverse-gamma family of "noninformative" prior distributions. We suggest instead to use a uniform prior on the hierarchical standard deviation, using the half-$t$ family when the number of groups is small and in other settings where a weakly informative prior is desired. We also illustrate the use of the half-$t$ family for hierarchical modeling of multiple variance parameters such as arise in the analysis of variance.

3,012 citations

Journal ArticleDOI
TL;DR: The case is made for basing all inference on one long run of the Markov chain and estimating the Monte Carlo error by standard nonparametric methods well-known in the time-series and operations research literature.
Abstract: Markov chain Monte Carlo using the Metropolis-Hastings algorithm is a general method for the simulation of stochastic processes having probability densities known up to a constant of proportionality. Despite recent advances in its theory, the practice has remained controversial. This article makes the case for basing all inference on one long run of the Markov chain and estimating the Monte Carlo error by standard nonparametric methods well-known in the time-series and operations research literature. In passing it touches on the Kipnis-Varadhan central limit theorem for reversible Markov chains, on some new variance estimators, on judging the relative efficiency of competing Monte Carlo schemes, on methods for constructing more rapidly mixing Markov chains and on diagnostics for Markov chain Monte Carlo.

1,912 citations
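The "standard nonparametric methods" this abstract advocates for estimating Monte Carlo error from one long run include the method of batch means, sketched below. The function name and the default batch count are my own illustrative choices.

```python
import numpy as np

def batch_means_se(chain, n_batches=30):
    """Monte Carlo standard error of a chain mean from one long run,
    via the method of batch means: split the run into batches and use
    the variability of the batch means (batch count is an arbitrary choice)."""
    x = np.asarray(chain, dtype=float)
    n = (x.size // n_batches) * n_batches     # drop the remainder
    means = x[:n].reshape(n_batches, -1).mean(axis=1)
    return means.std(ddof=1) / np.sqrt(n_batches)

rng = np.random.default_rng(3)
chain = rng.standard_normal(30000)            # iid stand-in for MCMC output
print(batch_means_se(chain))
```

For genuinely correlated MCMC output, the batches must be long relative to the chain's autocorrelation time for the batch means to be approximately independent.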

Journal ArticleDOI
TL;DR: In this paper, the authors demonstrate how substantive researchers can use a Monte Carlo study to decide on sample size and determine power, using two models, a confirmatory factor analysis (CFA) model and a growth model.
Abstract: A common question asked by researchers is, "What sample size do I need for my study?" Over the years, several rules of thumb have been proposed. In reality there is no rule of thumb that applies to all situations. The sample size needed for a study depends on many factors, including the size of the model, distribution of the variables, amount of missing data, reliability of the variables, and strength of the relations among the variables. The purpose of this article is to demonstrate how substantive researchers can use a Monte Carlo study to decide on sample size and determine power. Two models are used as examples, a confirmatory factor analysis (CFA) model and a growth model. The analyses are carried out using the Mplus program (Muthén & Muthén, 1998).

1,728 citations
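The Monte Carlo logic this abstract describes can be sketched in miniature: simulate many data sets under a known effect, test the effect in each, and take the rejection rate as power. The regression slope below is a toy stand-in for the article's CFA and growth-model examples run in Mplus; beta, n_reps, and the no-intercept model are illustrative assumptions, not values from the article.

```python
import numpy as np

def simulated_power(n, beta=0.2, n_reps=2000, seed=7):
    """Monte Carlo power for a z-test of a regression slope at alpha = .05:
    simulate n_reps data sets of size n with true slope beta, count how
    often the null of a zero slope is rejected."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(n_reps):
        x = rng.standard_normal(n)
        y = beta * x + rng.standard_normal(n)
        b = (x @ y) / (x @ x)                              # OLS slope, no intercept
        resid = y - b * x
        se = np.sqrt((resid @ resid) / (n - 1) / (x @ x))  # slope standard error
        rejections += abs(b / se) > 1.96
    return rejections / n_reps

print(simulated_power(50), simulated_power(200))
```

Running the simulation over a grid of n values locates the smallest sample size whose estimated power clears the desired threshold (e.g., .80), which is exactly how the article's Mplus studies are used.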

Journal ArticleDOI
TL;DR: This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis, which replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors.
Abstract: This article proposes a new approach to factor analysis and structural equation modeling using Bayesian analysis. The new approach replaces parameter specifications of exact zeros with approximate zeros based on informative, small-variance priors. It is argued that this produces an analysis that better reflects substantive theories. The proposed Bayesian approach is particularly beneficial in applications where parameters are added to a conventional model such that a nonidentified model is obtained if maximum-likelihood estimation is applied. This approach is useful for measurement aspects of latent variable modeling, such as with confirmatory factor analysis, and the measurement part of structural equation modeling. Two application areas are studied, cross-loadings and residual correlations in confirmatory factor analysis. An example using a full structural equation model is also presented, showing an efficient way to find model misspecification. The approach encompasses 3 elements: model testing using posterior predictive checking, model estimation, and model modification. Monte Carlo simulations and real data are analyzed using Mplus. The real-data analyses use data from Holzinger and Swineford's (1939) classic mental abilities study, Big Five personality factor data from a British survey, and science achievement data from the National Educational Longitudinal Study of 1988.

1,045 citations