Journal ISSN: 2011-2084

International Journal of Psychological Research

Universidad de San Buenaventura
About: International Journal of Psychological Research is an academic journal published by Universidad de San Buenaventura. The journal publishes mainly in the areas of Medicine & Population. It has the ISSN identifier 2011-2084 and is open access. Over its lifetime, 348 publications have been published, receiving 6,017 citations.


Papers
Journal Article
TL;DR: In this article, the importance of equivalence in psychological research is reviewed, along with the main theoretical and methodological issues regarding measurement invariance within the framework of confirmatory factor analysis.
Abstract: Researchers often compare groups of individuals on psychological variables. When comparing groups an assumption is made that the instrument measures the same psychological construct in all groups. If this assumption holds, the comparisons are valid and differences/similarities between groups can be meaningfully interpreted. If this assumption does not hold, comparisons and interpretations are not fully meaningful. The establishment of measurement invariance is a prerequisite for meaningful comparisons across groups. This paper first reviews the importance of equivalence in psychological research, and then the main theoretical and methodological issues regarding measurement invariance within the framework of confirmatory factor analysis. A step-by-step empirical example of measurement invariance testing is provided along with syntax examples for fitting such models in LISREL.

1,142 citations
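As a rough sketch of the invariance ladder this abstract walks through (written in standard multi-group CFA notation, not in the paper's own symbols), the measurement model and the usual nested constraint levels are:

\begin{align*}
x^{(g)} &= \tau^{(g)} + \Lambda^{(g)} \xi^{(g)} + \delta^{(g)} \quad \text{(measurement model in group } g\text{)} \\
\text{configural:}\;& \text{same pattern of fixed/free loadings in each } \Lambda^{(g)} \\
\text{metric:}\;& \Lambda^{(1)} = \cdots = \Lambda^{(G)} \\
\text{scalar:}\;& \Lambda^{(1)} = \cdots = \Lambda^{(G)} \;\text{and}\; \tau^{(1)} = \cdots = \tau^{(G)}
\end{align*}

Each level is tested against the previous one by comparing the fit of the nested models (e.g., a chi-square difference test), which mirrors the step-by-step testing procedure the paper illustrates with LISREL syntax.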

Journal Article
TL;DR: In this article, a guideline for conducting factor analysis, a technique used to estimate the population-level factor structure underlying sample data, is provided, along with suggestions for carrying out preliminary procedures and exploratory and confirmatory factor analyses (EFA and CFA), with SPSS and LISREL syntax examples.
Abstract: The current article provides a guideline for conducting factor analysis, a technique used to estimate the population-level factor structure underlying the given sample data. First, the distinction between exploratory and confirmatory factor analyses (EFA and CFA) is briefly discussed; along with this discussion, the notion of principal component analysis and why it does not provide a valid substitute for factor analysis is noted. Second, a step-by-step walk-through of conducting factor analysis is illustrated; through these walk-through instructions, various decisions that need to be made in factor analysis are discussed and recommendations are provided. Specifically, suggestions for how to carry out preliminary procedures, EFA, and CFA are provided with SPSS and LISREL syntax examples. Finally, some critical issues concerning the appropriate (and not-so-appropriate) use of factor analysis are discussed along with recommended practices.

1,079 citations
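The abstract contrasts factor analysis with principal component analysis; the following is a minimal sketch of that distinction using scikit-learn rather than the SPSS/LISREL syntax the paper provides, with simulated data and hypothetical variable names.

# Minimal sketch of the EFA-vs-PCA distinction discussed above.
# Uses scikit-learn in place of the SPSS/LISREL syntax the paper provides;
# the data are simulated and the names are hypothetical.
import numpy as np
from sklearn.decomposition import FactorAnalysis, PCA

rng = np.random.default_rng(0)
n, p, k = 500, 6, 2                       # observations, items, latent factors
loadings = rng.normal(size=(p, k))        # true loading matrix (simulation only)
factors = rng.normal(size=(n, k))
X = factors @ loadings.T + rng.normal(scale=0.5, size=(n, p))  # items = common factors + unique error

fa = FactorAnalysis(n_components=k).fit(X)
pca = PCA(n_components=k).fit(X)

# FA models shared (common) variance and estimates item-specific error variances;
# PCA simply re-expresses total variance, which is why it is not a substitute for FA.
print("FA loadings:\n", fa.components_.T)
print("FA unique variances:", fa.noise_variance_)
print("PCA component weights:\n", pca.components_.T)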

Journal Article
TL;DR: In this paper, the authors argue for empirical flexibility with respect to the choice of transformation for the RTs, and advocate minimal a-priori data trimming combined with model criticism.
Abstract: Reaction times (RTs) are an important source of information in experimental psychology. Classical methodological considerations pertaining to the statistical analysis of RT data are optimized for analyses of aggregated data, based on subject or item means (cf. Forster & Dickinson, 1976). Mixed-effects modeling (see, e.g., Baayen, Davidson, & Bates, 2008) does not require prior aggregation and allows the researcher the more ambitious goal of predicting individual responses. Mixed-effects modeling calls for a reconsideration of the classical methodological strategies for analysing RTs. In this study, we argue for empirical flexibility with respect to the choice of transformation for the RTs. We advocate minimal a-priori data trimming, combined with model criticism. We also show how trial-to-trial, longitudinal dependencies between individual observations can be brought into the statistical model. These strategies are illustrated for a large dataset with a non-trivial random-effects structure. Special attention is paid to the evaluation of interactions involving fixed-effect factors that partition the levels sampled by random-effect factors.

772 citations
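A minimal sketch of a trial-level mixed-effects RT model of the kind the abstract describes, using statsmodels rather than the lme4 workflow cited in the paper; the data are simulated and the column names (rt, condition, subject) are hypothetical.

# Minimal sketch of a mixed-effects RT analysis of the kind described above.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
df = pd.DataFrame({
    "rt": rng.lognormal(mean=6.3, sigma=0.3, size=400),   # simulated RTs in ms
    "condition": np.tile(["a", "b"], 200),
    "subject": np.repeat(np.arange(20), 20),
})

# Model individual trials (no prior aggregation); a log transform is one of the
# candidate transformations the paper weighs against the raw and inverse scales.
df["log_rt"] = np.log(df["rt"])
model = smf.mixedlm("log_rt ~ condition", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())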

Journal Article
TL;DR: In this paper, various techniques aimed at detecting potential outliers are reviewed; these techniques are subdivided into two classes, those addressing univariate data and those addressing multivariate data.
Abstract: Outliers are observations or measures that are suspicious because they are much smaller or much larger than the vast majority of the observations. These observations are problematic because they may not be caused by the mental process under scrutiny or may not reflect the ability under examination. The problem is that a few outliers are sometimes enough to distort the group results (by altering the mean performance, by increasing variability, etc.). In this paper, various techniques aimed at detecting potential outliers are reviewed. These techniques are subdivided into two classes, those addressing univariate data and those addressing multivariate data. Within these two classes, we consider the cases where the population distribution is known to be normal, the population is not normal but known, or the population is unknown. Recommendations are put forward in each case.

494 citations
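As a minimal sketch of the two classes the abstract distinguishes, the following shows one common univariate check (standardized scores under a normality assumption) and one common multivariate check (Mahalanobis distance against a chi-square cutoff); the thresholds used here are widespread conventions, not the paper's specific recommendations, and the data are simulated.

# Minimal sketches of univariate and multivariate outlier checks of the kind reviewed above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.normal(500, 50, size=200)                                   # univariate sample
X = rng.multivariate_normal([0, 0], [[1, .6], [.6, 1]], size=200)   # bivariate sample

# Univariate, normal population assumed: flag standardized scores beyond |z| > 3.
z = (x - x.mean()) / x.std(ddof=1)
univ_outliers = np.abs(z) > 3

# Multivariate: flag large squared Mahalanobis distances against a chi-square cutoff.
diff = X - X.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
multi_outliers = d2 > stats.chi2.ppf(0.999, df=X.shape[1])

print(univ_outliers.sum(), "univariate and", multi_outliers.sum(), "multivariate flags")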

Journal Article
TL;DR: A broad taxonomy of standardization methods for personality, organizational and cross-cultural psychology can be found in this paper, where the authors discuss when and how scores can be standardized and what statistical tests are available after the transformation.
Abstract: The term standardization has been used in a number of different ways in psychological research, mainly in relation to standardization of procedure, standardization of interpretation, and standardization of scores. The current paper discusses the standardization of scores in more detail. Standardization of scores is a common practice in settings where researchers are concerned with different response styles or with issues of faking or social desirability. In these contexts, scores are transformed to increase validity prior to data analysis. In this paper, we outline a broad taxonomy of standardization methods, discuss when and how scores can be standardized, and describe what statistical tests are available after the transformation. Simple step-by-step procedures and examples of syntax files for SPSS are provided. Applications for personality, organizational, and cross-cultural psychology are discussed. Limitations of these techniques are discussed, especially in terms of theoretical interpretation of the transformed scores and the use of such scores with multivariate statistics.

111 citations
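As one illustration of score standardization of the kind the abstract discusses, the sketch below applies within-person (ipsative) z-standardization across items, a transformation often used to dampen response styles; it is only one method from the taxonomy the paper covers, uses Python instead of the paper's SPSS syntax, and works on simulated data with hypothetical item names.

# Minimal sketch of within-person score standardization (one method among many discussed above).
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
raw = pd.DataFrame(rng.integers(1, 6, size=(100, 10)),          # 100 respondents, 10 Likert items
                   columns=[f"item{i}" for i in range(1, 11)])

# Subtract each respondent's own mean and divide by their own SD, so transformed scores
# express relative standing within the person rather than absolute response level.
within_person_z = raw.sub(raw.mean(axis=1), axis=0).div(raw.std(axis=1, ddof=1), axis=0)

print(within_person_z.round(2).head())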

Performance Metrics
No. of papers from the journal in previous years
Year    Papers
2023    18
2022    26
2021    19
2020    20
2019    18
2018    15