
Dubravka Svetina

Researcher at Indiana University

Publications: 39
Citations: 1,289

Dubravka Svetina is an academic researcher from Indiana University. The author has contributed to research in topics including Item response theory and Measurement invariance. The author has an h-index of 13 and has co-authored 38 publications receiving 908 citations. Previous affiliations of Dubravka Svetina include Arizona State University.

Papers
Journal Article

Assessing the Hypothesis of Measurement Invariance in the Context of Large-Scale International Surveys.

TL;DR: This article used data from one large-scale survey to examine the extent to which typical fit measures used in multiple-group confirmatory factor analysis are suitable for detecting measurement invariance in a large-scale survey context.
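As an illustration of the kind of analysis the paper evaluates, the sketch below fits configural, metric, and scalar multiple-group models in lavaan and collects the fit measures typically inspected. The data set, grouping variable, and one-factor model are hypothetical stand-ins, not taken from the article.

```r
# Minimal sketch of multiple-group invariance testing in lavaan.
# `survey_data`, the grouping variable "country", and the model are assumptions.
library(lavaan)

model <- ' engagement =~ item1 + item2 + item3 + item4 '

# Configural model: same structure, all parameters free across groups.
fit_configural <- cfa(model, data = survey_data, group = "country")

# Metric model: factor loadings constrained equal across groups.
fit_metric <- cfa(model, data = survey_data, group = "country",
                  group.equal = "loadings")

# Scalar model: loadings and intercepts constrained equal across groups.
fit_scalar <- cfa(model, data = survey_data, group = "country",
                  group.equal = c("loadings", "intercepts"))

# Typical practice inspects changes in CFI and RMSEA across these nested
# models rather than relying only on the chi-square difference test.
sapply(list(configural = fit_configural,
            metric     = fit_metric,
            scalar     = fit_scalar),
       fitMeasures, fit.measures = c("cfi", "rmsea", "srmr"))
```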
Journal Article

Evaluation of parallel analysis methods for determining the number of factors

TL;DR: Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) in identifying the number of underlying factors.
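For readers unfamiliar with the technique, the following is a minimal from-scratch sketch of the PA-PCA variant: observed eigenvalues are compared against a quantile of eigenvalues obtained from random data of the same dimensions. The function and data names are assumptions for illustration; the psych package's fa.parallel() provides a full implementation of both variants.

```r
# Minimal parallel-analysis sketch (PA-PCA variant); `scores` is an assumed
# n-by-p matrix of observed item scores.
parallel_analysis_pca <- function(scores, n_sims = 500, quantile_cut = 0.95) {
  n <- nrow(scores)
  p <- ncol(scores)

  # Observed eigenvalues of the correlation matrix.
  obs_eigen <- eigen(cor(scores), only.values = TRUE)$values

  # Eigenvalues from correlation matrices of random normal data of the same size.
  sim_eigen <- replicate(n_sims, {
    eigen(cor(matrix(rnorm(n * p), nrow = n)), only.values = TRUE)$values
  })

  # Criterion: the chosen quantile of the simulated eigenvalues, per component.
  threshold <- apply(sim_eigen, 1, quantile, probs = quantile_cut)

  # Retain components up to (but not including) the first observed eigenvalue
  # that fails to exceed the simulated criterion.
  first_fail <- which(obs_eigen <= threshold)[1]
  if (is.na(first_fail)) p else first_fail - 1
}
```

Under these assumptions, parallel_analysis_pca(scores) returns the suggested number of components to retain.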
Journal Article

Multiple-Group Invariance with Categorical Outcomes Using Updated Guidelines: An Illustration Using Mplus and the lavaan/semTools Packages

TL;DR: The current state of categorical measurement equivalence/invariance (ME/I) is described, and an up-to-date method for model identification and invariance testing is demonstrated via multiple-group confirmatory factor analysis using Mplus and the lavaan and semTools packages in R.
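A hedged sketch of the R side of this workflow is shown below, using semTools::measEq.syntax() to generate and fit lavaan models under Wu and Estabrook (2016) identification; the model, item names, data set, and grouping variable are illustrative assumptions, not taken from the article.

```r
# Categorical invariance testing with lavaan/semTools; all data and variable
# names here are hypothetical.
library(lavaan)
library(semTools)

model <- ' attitude =~ y1 + y2 + y3 + y4 '
ords  <- c("y1", "y2", "y3", "y4")

# Configural baseline for ordered-categorical indicators.
fit_config <- measEq.syntax(configural.model = model, data = items,
                            ordered = ords, parameterization = "delta",
                            ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016",
                            group = "country", return.fit = TRUE)

# Thresholds constrained equal across groups.
fit_thresh <- measEq.syntax(configural.model = model, data = items,
                            ordered = ords, parameterization = "delta",
                            ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016",
                            group = "country", group.equal = "thresholds",
                            return.fit = TRUE)

# Thresholds and loadings constrained equal across groups.
fit_load <- measEq.syntax(configural.model = model, data = items,
                          ordered = ords, parameterization = "delta",
                          ID.fac = "std.lv", ID.cat = "Wu.Estabrook.2016",
                          group = "country",
                          group.equal = c("thresholds", "loadings"),
                          return.fit = TRUE)

# Scaled chi-square difference tests across the nested models.
lavTestLRT(fit_config, fit_thresh, fit_load)
```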

Journal Article

Measurement Invariance in International Surveys: Categorical Indicators and Fit Measure Performance

TL;DR: In this paper, a simulation study grounded in empirical results evaluated the performance of several fit measures when data are assumed to have an ordered categorical, rather than the typically assumed continuous, scale. The authors conclude that classic measures and their associated criteria were either unsuitable in a context with many groups and varied sample sizes or required adjustment.
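To make the comparison concrete, the sketch below fits a baseline and a constrained multiple-group model with ordered indicators under WLSMV estimation and computes changes in scaled fit indices, the kind of quantities whose behavior the study examines. All object and variable names are assumed for illustration.

```r
# Changes in scaled fit indices between nested invariance models with
# ordered-categorical indicators; `survey_items` and "country" are assumptions.
library(lavaan)

model <- ' f =~ x1 + x2 + x3 + x4 '
items <- c("x1", "x2", "x3", "x4")

fit_base <- cfa(model, data = survey_items, group = "country",
                ordered = items, estimator = "WLSMV")

fit_constrained <- cfa(model, data = survey_items, group = "country",
                       ordered = items, estimator = "WLSMV",
                       group.equal = c("thresholds", "loadings"))

# Differences in scaled CFI/RMSEA and SRMR between the two models.
measures <- c("cfi.scaled", "rmsea.scaled", "srmr")
round(fitMeasures(fit_constrained, measures) - fitMeasures(fit_base, measures), 4)
```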