Author

Raoul P. P. P. Grasman

Bio: Raoul P. P. P. Grasman is an academic researcher at the University of Amsterdam. He has contributed to research topics including catastrophe theory and Type I and Type II errors. He has an h-index of 26 and has co-authored 68 publications receiving 4,514 citations.


Papers
Journal Article
TL;DR: This article presents an alternative model that separates the within-person process from stable between-person differences through the inclusion of random intercepts, and discusses how this model is related to existing structural equation models that include cross-lagged relationships.
Abstract: The cross-lagged panel model (CLPM) is believed by many to overcome the problems associated with the use of cross-lagged correlations as a way to study causal influences in longitudinal panel data. The current article, however, shows that if stability of constructs is to some extent of a trait-like, time-invariant nature, the autoregressive relationships of the CLPM fail to adequately account for this. As a result, the lagged parameters that are obtained with the CLPM do not represent the actual within-person relationships over time, and this may lead to erroneous conclusions regarding the presence, predominance, and sign of causal influences. In this article we present an alternative model that separates the within-person process from stable between-person differences through the inclusion of random intercepts, and we discuss how this model is related to existing structural equation models that include cross-lagged relationships. We derive the analytical relationship between the cross-lagged parameters from the CLPM and the alternative model, and use simulations to demonstrate the spurious results that may arise when using the CLPM to analyze data that include stable, trait-like individual differences. We also present a modeling strategy to avoid this pitfall and illustrate this using an empirical data set. The implications for both existing and future cross-lagged panel research are discussed.

1,633 citations
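As a rough illustration of the pitfall described above, the following sketch (not the authors' code; all parameter values are invented) simulates panel data in which the two variables have no within-person cross-lagged effects but do share correlated, stable trait levels, and then shows that a naive CLPM-style regression still yields a nonzero cross-lagged estimate:

```python
# Hypothetical illustration: x and y have NO within-person cross-lagged effects,
# but each person has a stable trait level, and the two trait levels are correlated.
import numpy as np

rng = np.random.default_rng(1)
n_persons, n_waves = 500, 6
trait = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=n_persons)

x = np.zeros((n_persons, n_waves))
y = np.zeros((n_persons, n_waves))
for t in range(1, n_waves):
    # purely autoregressive within-person dynamics, no cross-lagged paths
    x[:, t] = 0.3 * x[:, t - 1] + rng.normal(0, 1, n_persons)
    y[:, t] = 0.3 * y[:, t - 1] + rng.normal(0, 1, n_persons)

X_obs = x + trait[:, [0]]   # observed score = within-person process + stable trait
Y_obs = y + trait[:, [1]]

# Naive CLPM-style regression of y_t on x_{t-1} and y_{t-1}, pooled over persons and waves
yt   = Y_obs[:, 1:].ravel()
ylag = Y_obs[:, :-1].ravel()
xlag = X_obs[:, :-1].ravel()
design = np.column_stack([np.ones_like(yt), ylag, xlag])
coef, *_ = np.linalg.lstsq(design, yt, rcond=None)
print("estimated cross-lagged effect of x on y:", coef[2])  # nonzero despite no true effect
```

Per the abstract, including person-specific (random) intercepts would absorb the stable trait component and pull the spurious cross-lagged estimate back toward zero.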

Journal Article
TL;DR: A new explanation of the positive manifold based on a dynamical model is proposed, in which reciprocal causation or mutualism plays a central role, and it is shown that the positive manifold emerges purely from positive, beneficial interactions between cognitive processes during development.
Abstract: Scores on cognitive tasks used in intelligence tests correlate positively with each other, that is, they display a positive manifold of correlations. The positive manifold is often explained by positing a dominant latent variable, the g factor, associated with a single quantitative cognitive or biological process or capacity. In this article, a new explanation of the positive manifold based on a dynamical model is proposed, in which reciprocal causation or mutualism plays a central role. It is shown that the positive manifold emerges purely by positive beneficial interactions between cognitive processes during development. A single underlying g factor plays no role in the model. The model offers explanations of important findings in intelligence research, such as the hierarchical factor structure of intelligence, the low predictability of intelligence from early childhood performance, the integration/differentiation effect, the increase in heritability of g, and the Jensen effect, and is consistent with current explanations of the Flynn effect.

685 citations
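A minimal sketch of the kind of mutualism dynamics the article describes (coupling strengths, growth rates, capacities, and simulation sizes below are arbitrary choices, not the published parameter values): each process grows logistically and is boosted by every other process, and purely positive interactions are enough to produce uniformly positive correlations and a dominant first eigenvalue across simulated subjects.

```python
# Hypothetical sketch of mutualism dynamics: no common g factor is simulated,
# yet the resulting abilities correlate positively across subjects.
import numpy as np

rng = np.random.default_rng(2)
n_subjects, n_processes, n_steps, dt = 400, 8, 2000, 0.01

M = np.full((n_processes, n_processes), 0.05)   # weak, uniformly positive couplings
np.fill_diagonal(M, 0.0)
a = 1.0                                          # growth rate (same for everyone here)
K = rng.uniform(0.5, 1.5, size=(n_subjects, n_processes))  # person-specific capacities

x = np.full((n_subjects, n_processes), 0.05)     # small initial ability levels
for _ in range(n_steps):
    growth = a * x * (1 - x / K)                 # logistic growth toward capacity
    boost  = a * x * (x @ M.T) / K               # mutualistic interactions
    x = x + dt * (growth + boost)

corr = np.corrcoef(x, rowvar=False)
print("min off-diagonal correlation:", corr[~np.eye(n_processes, dtype=bool)].min())
print("share of variance on first eigenvalue:",
      np.linalg.eigvalsh(corr)[-1] / n_processes)
```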

Journal Article
TL;DR: Attention is drawn to the Savage-Dickey density ratio method, which can be used to compute the result of a Bayesian hypothesis test for nested models under certain plausible restrictions on the parameter priors.

499 citations
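A small worked example of the Savage-Dickey idea, using a conjugate beta-binomial setup so both densities are available in closed form (the data and the Beta(1, 1) prior are illustrative assumptions, not taken from the article): the Bayes factor for the nested point hypothesis is the posterior density at that point divided by the prior density at that point, both evaluated under the encompassing model.

```python
# Illustrative Savage-Dickey computation for H0: rate = 0.5 vs H1: rate ~ Beta(1, 1).
from scipy.stats import beta

k, n = 59, 100              # hypothetical data: 59 successes in 100 trials
prior_a, prior_b = 1.0, 1.0 # Beta(1, 1) prior on the rate under H1

prior_at_null = beta.pdf(0.5, prior_a, prior_b)
posterior_at_null = beta.pdf(0.5, prior_a + k, prior_b + n - k)  # conjugate update

bf01 = posterior_at_null / prior_at_null   # evidence for H0 relative to H1
print(f"BF01 = {bf01:.2f}, BF10 = {1 / bf01:.2f}")
```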

Journal Article
TL;DR: This work studied the performance of the EZ-diffusion model in terms of parameter recovery and robustness against misspecification using Monte Carlo simulations; the model was also applied to a real-world data set.
Abstract: The EZ-diffusion model for two-choice response time tasks takes mean response time, the variance of response time, and response accuracy as inputs. The model transforms these data via three simple equations to produce unique values for the quality of information, response conservativeness, and nondecision time. This transformation of observed data in terms of unobserved variables addresses the speed—accuracy trade-off and allows an unambiguous quantification of performance differences in two-choice response time tasks. The EZ-diffusion model can be applied to data-sparse situations to facilitate individual subject analysis. We studied the performance of the EZ-diffusion model in terms of parameter recovery and robustness against misspecification by using Monte Carlo simulations. The EZ model was also applied to a real-world data set.

418 citations
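A sketch of the closed-form transformation described in the abstract; the expressions below follow the published EZ-diffusion equations but should be verified against the original article, and the edge correction needed when accuracy is exactly 0.5 or 1.0 is omitted here. The input values in the example call are toy numbers, not data from the paper.

```python
# Sketch of the EZ-diffusion transformation: accuracy, RT variance, and RT mean
# are mapped to drift rate v, boundary separation a, and nondecision time Ter.
import numpy as np

def ez_diffusion(prop_correct, rt_variance, rt_mean, s=0.1):
    L = np.log(prop_correct / (1 - prop_correct))   # logit of accuracy
    x = L * (L * prop_correct**2 - L * prop_correct + prop_correct - 0.5) / rt_variance
    v = np.sign(prop_correct - 0.5) * s * x**0.25   # drift rate (quality of information)
    a = s**2 * L / v                                # boundary separation (conservativeness)
    y = -v * a / s**2
    mean_decision_time = (a / (2 * v)) * (1 - np.exp(y)) / (1 + np.exp(y))
    ter = rt_mean - mean_decision_time              # nondecision time
    return v, a, ter

# toy input values for illustration
print(ez_diffusion(prop_correct=0.802, rt_variance=0.112, rt_mean=0.723))
```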

Journal Article
TL;DR: This work explains the multiple-comparison problem and demonstrates that researchers almost never correct for it, and describes four remedies: the omnibus F test, control of the familywise error rate, control of the false discovery rate, and preregistration of the hypotheses.
Abstract: Many psychologists do not realize that exploratory use of the popular multiway analysis of variance harbors a multiple-comparison problem. In the case of two factors, three separate null hypotheses are subject to test (i.e., two main effects and one interaction). Consequently, the probability of at least one Type I error (if all null hypotheses are true) is 14 % rather than 5 %, if the three tests are independent. We explain the multiple-comparison problem and demonstrate that researchers almost never correct for it. To mitigate the problem, we describe four remedies: the omnibus F test, control of the familywise error rate, control of the false discovery rate, and preregistration of the hypotheses.

312 citations
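The 14% figure quoted above follows directly from the independence assumption; a quick check, together with a Bonferroni-style corrected alpha shown purely as an illustration of controlling the familywise error rate:

```python
# With k independent tests at level alpha, the chance of at least one Type I error
# under the joint null is 1 - (1 - alpha)**k.
alpha, k = 0.05, 3                      # two main effects + one interaction
familywise_error = 1 - (1 - alpha) ** k
print(f"P(at least one false positive) = {familywise_error:.3f}")   # ~0.143

# Bonferroni-style control of the familywise error rate: test each effect at alpha / k.
corrected_alpha = alpha / k
print(f"per-test alpha under Bonferroni correction = {corrected_alpha:.4f}")
```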


Cited by
Posted Content
TL;DR: In this book, the author provides a unified and comprehensive theory of structural time series models, including a detailed treatment of the Kalman filter, for modelling economic and social time series and for addressing the special problems that the treatment of such series poses.
Abstract: In this book, Andrew Harvey sets out to provide a unified and comprehensive theory of structural time series models. Unlike the traditional ARIMA models, structural time series models consist explicitly of unobserved components, such as trends and seasonals, which have a direct interpretation. As a result, the model selection methodology associated with structural models is much closer to econometric methodology. The link with econometrics is made even closer by the natural way in which the models can be extended to include explanatory variables and to cope with multivariate time series. From the technical point of view, state space models and the Kalman filter play a key role in the statistical treatment of structural time series models. The book includes a detailed treatment of the Kalman filter. This technique was originally developed in control engineering, but is becoming increasingly important in fields such as economics and operations research. This book is concerned primarily with modelling economic and social time series, and with addressing the special problems which the treatment of such series poses. The properties of the models and the methodological techniques used to select them are illustrated with various applications. These range from the modelling of trends and cycles in US macroeconomic time series to an evaluation of the effects of seat belt legislation in the UK.

4,252 citations
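A minimal sketch of the Kalman filter recursions for the simplest structural model treated in the book, the local level model (a random-walk level observed with noise); the variance values and the simulated series are placeholders, not estimates from any real data.

```python
# Local level model: y_t = mu_t + eps_t,  mu_t = mu_{t-1} + eta_t.
import numpy as np

def local_level_filter(y, var_eps=1.0, var_eta=0.1, m0=0.0, p0=1e6):
    """Return filtered estimates of the unobserved level mu_t."""
    m, p = m0, p0                     # prior mean and variance of the level
    filtered = np.empty_like(y, dtype=float)
    for t, obs in enumerate(y):
        p_pred = p + var_eta          # prediction: random-walk level carries over
        gain = p_pred / (p_pred + var_eps)   # Kalman gain
        m = m + gain * (obs - m)      # update with the new observation
        p = (1 - gain) * p_pred
        filtered[t] = m
    return filtered

rng = np.random.default_rng(0)
level = np.cumsum(rng.normal(0, 0.3, 200))     # simulated unobserved trend
y = level + rng.normal(0, 1.0, 200)            # noisy observations
print(local_level_filter(y)[-5:])
```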

Journal Article
TL;DR: A survey of factor-analytic studies of human cognitive abilities, with a focus on the role of factor analysis in the evaluation of human cognitive ability.
Abstract: Human cognitive abilities: A survey of factor analytic studies. Gifted and Talented International, Vol. 13, No. 2, pp. 97-98 (1998).

2,388 citations

Journal Article
TL;DR: The diffusion decision model is reviewed to show how it translates behavioral data (accuracy, mean response times, and response time distributions) into components of cognitive processing; its broad range of applications, including research in the domains of aging and neurophysiology, is also reviewed.
Abstract: The diffusion decision model allows detailed explanations of behavior in two-choice discrimination tasks. In this article, the model is reviewed to show how it translates behavioral data—accuracy, mean response times, and response time distributions—into components of cognitive processing. Three experiments are used to illustrate experimental manipulations of three components: stimulus difficulty affects the quality of information on which a decision is based; instructions emphasizing either speed or accuracy affect the criterial amounts of information that a subject requires before initiating a response; and the relative proportions of the two stimuli affect biases in drift rate and starting point. The experiments also illustrate the strong constraints that ensure the model is empirically testable and potentially falsifiable. The broad range of applications of the model is also reviewed, including research in the domains of aging and neurophysiology.

2,318 citations
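A toy simulation of the accumulation process the review describes, using a simple Euler discretization; the drift, boundary separation, starting point, and nondecision time below are illustrative values of mine, not parameters from the article.

```python
# Evidence drifts toward one of two boundaries; crossing time plus a nondecision
# constant gives the response time.
import numpy as np

def simulate_trial(v=0.3, a=1.0, z=0.5, ter=0.3, s=1.0, dt=0.001, rng=None):
    rng = rng or np.random.default_rng()
    x, t = z * a, 0.0                       # start between the boundaries
    while 0.0 < x < a:
        x += v * dt + s * np.sqrt(dt) * rng.normal()
        t += dt
    return ("upper" if x >= a else "lower"), t + ter

rng = np.random.default_rng(3)
trials = [simulate_trial(rng=rng) for _ in range(2000)]
accuracy = np.mean([resp == "upper" for resp, _ in trials])
mean_rt = np.mean([rt for _, rt in trials])
print(f"P(upper boundary) = {accuracy:.3f}, mean RT = {mean_rt:.3f} s")
```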

01 Jan 2016
Applied longitudinal data analysis: Modeling change and event occurrence.

2,102 citations

Journal Article
TL;DR: This study used Monte Carlo data simulation techniques to evaluate sample size requirements for common applied SEMs, and systematically varied key model properties, including number of indicators and factors, magnitude of factor loadings and path coefficients, and amount of missing data.
Abstract: Determining sample size requirements for structural equation modeling (SEM) is a challenge often faced by investigators, peer reviewers, and grant writers. Recent years have seen a large increase in SEMs in the behavioral science literature, but consideration of sample size requirements for applied SEMs often relies on outdated rules-of-thumb. This study used Monte Carlo data simulation techniques to evaluate sample size requirements for common applied SEMs. Across a series of simulations, we systematically varied key model properties, including number of indicators and factors, magnitude of factor loadings and path coefficients, and amount of missing data. We investigated how changes in these parameters affected sample size requirements with respect to statistical power, bias in the parameter estimates, and overall solution propriety. Results revealed a range of sample size requirements (i.e., from 30 to 460 cases), meaningful patterns of association between parameters and sample size, and highlighted the limitations of commonly cited rules-of-thumb. The broad "lessons learned" for determining SEM sample size requirements are discussed.

1,837 citations
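A rough sketch of the kind of Monte Carlo parameter-recovery study the abstract describes, scaled down to a one-factor model with equal loadings so the loading can be read off the average off-diagonal covariance; the design, loading value, and replication count are simplifications of mine, with only the 30-460 range of sample sizes taken from the abstract.

```python
# Simulate a one-factor model at several sample sizes and check how well the
# loading is recovered on average.
import numpy as np

rng = np.random.default_rng(4)
true_loading, n_indicators, n_reps = 0.7, 4, 500

for n in (30, 100, 460):
    estimates = []
    for _ in range(n_reps):
        factor = rng.normal(size=(n, 1))
        errors = rng.normal(0, np.sqrt(1 - true_loading**2), size=(n, n_indicators))
        data = true_loading * factor + errors
        cov = np.cov(data, rowvar=False)
        off_diag = cov[~np.eye(n_indicators, dtype=bool)]
        # for equal loadings, each off-diagonal covariance equals loading**2
        estimates.append(np.sqrt(max(off_diag.mean(), 0.0)))
    bias = np.mean(estimates) - true_loading
    print(f"n = {n:4d}: mean loading estimate = {np.mean(estimates):.3f}, bias = {bias:+.3f}")
```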