Author

Dubravka Svetina

Other affiliations: Arizona State University
Bio: Dubravka Svetina is an academic researcher from Indiana University. The author has contributed to research in the topics of Item response theory and Measurement invariance. The author has an h-index of 13 and has co-authored 38 publications receiving 908 citations. Previous affiliations of Dubravka Svetina include Arizona State University.
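
As a side note on the metric in the bio: an author has an h-index of h when h of their papers have each received at least h citations. A minimal sketch in R, using an invented citation vector rather than Svetina's actual record:

```r
# Compute an h-index from a vector of per-paper citation counts:
# the largest h such that h papers each have at least h citations.
h_index <- function(citations) {
  sorted <- sort(citations, decreasing = TRUE)
  sum(sorted >= seq_along(sorted))  # count the leading papers that qualify
}

h_index(c(335, 159, 151, 148, 97, 12, 5, 3))  # hypothetical counts; returns 6
```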

Papers
Journal ArticleDOI
TL;DR: This article used data from one large-scale survey as a basis for examining the extent to which typical fit measures used in multiple-group confirmatory factor analysis are suitable for detecting measurement invariance in a large-scale survey context.
Abstract: In the field of international educational surveys, equivalence of achievement scale scores across countries has received substantial attention in the academic literature; however, only a relatively recent emphasis on scale score equivalence in nonachievement education surveys has emerged. Given the current state of research in multiple-group models, findings regarding these recent measurement invariance investigations were supported with research that was limited in scope to few groups and relatively small sample sizes. To that end, this study uses data from one large-scale survey as a basis for examining the extent to which typical fit measures used in multiple-group confirmatory factor analysis are suitable for detecting measurement invariance in a large-scale survey context. Using measures validated in a smaller scale context and an empirically grounded simulation study, our findings indicate that many typical measures and associated criteria are either unsuitable in a large-group and varied sample-size...

335 citations
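
The workflow this paper stress-tests, multiple-group CFA with increasingly constrained models compared on fit measures, can be sketched in a few lines of R with lavaan (the package named elsewhere in this list). The one-factor model, data frame `survey`, and grouping variable `country` are hypothetical placeholders, not the study's actual survey items:

```r
# Minimal sketch of multiple-group CFA invariance testing with lavaan.
library(lavaan)

model <- 'belonging =~ item1 + item2 + item3 + item4'

configural <- cfa(model, data = survey, group = "country")
metric     <- cfa(model, data = survey, group = "country",
                  group.equal = "loadings")
scalar     <- cfa(model, data = survey, group = "country",
                  group.equal = c("loadings", "intercepts"))

# Typical practice judges invariance by changes in fit indices such as
# CFI and RMSEA across the nested models, alongside the chi-square
# difference test; those cutoff criteria are what the paper evaluates.
fitMeasures(configural, c("cfi", "rmsea"))
fitMeasures(scalar, c("cfi", "rmsea"))
lavTestLRT(configural, metric, scalar)
```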

Journal ArticleDOI
TL;DR: In this article, the performance of parallel analysis using principal component analysis (PA-PCA) was compared with that of parallel analysis using principal axis factoring (PA-PAF) for identifying the number of underlying factors.
Abstract: Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria were examined. The 95th percentile criterion was preferable for assessing the first eigenvalue using either extraction method. In assessing subsequent eigenvalues, PA-PCA tended to perform as well as or better than PA-PAF for models with one factor or multiple minimally correlated factors; the relative performance of the mean eigenvalue and the 95th percentile eigenvalue criteria depended on the number of variables per factor. PA-PAF using the mean eigenvalue criterion generally performed best if factors were more than minimally correlated or if one or more strong general factors as well as group factors were present.

159 citations
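
The two retention criteria this paper compares are easy to see in a hand-rolled parallel analysis. Below is a minimal sketch in base R with simulated noise data (so both criteria should retain close to zero factors); PA-PCA is shown, while PA-PAF would instead use eigenvalues of the reduced correlation matrix with squared multiple correlations on the diagonal:

```r
# Parallel analysis with PCA extraction (PA-PCA): compare observed
# eigenvalues against eigenvalues from random data of the same size.
set.seed(1)
n <- 300; p <- 12
X <- matrix(rnorm(n * p), n, p)   # stand-in for real data (pure noise here)
obs_eig <- eigen(cor(X))$values

# Eigenvalues from 500 random data sets of the same dimensions
rand_eig <- replicate(500, eigen(cor(matrix(rnorm(n * p), n, p)))$values)

mean_crit <- rowMeans(rand_eig)                         # mean criterion
p95_crit  <- apply(rand_eig, 1, quantile, probs = 0.95) # 95th percentile

# Retain leading factors until the first one fails its criterion.
n_retain <- function(obs, crit) sum(cumprod(obs > crit))
n_retain(obs_eig, mean_crit)  # with pure noise, exceeds 0 about half the time
n_retain(obs_eig, p95_crit)   # exceeds 0 only ~5% of the time, which is why
                              # the paper prefers the 95th percentile for
                              # judging the first eigenvalue
```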

Journal ArticleDOI
TL;DR: The current state of categorical ME/I is described and an up-to-date method for model identification and invariance testing is demonstrated and exemplified via multiple-group confirmatory factor analysis using Mplus and the lavaan and semTools packages in R.
Abstract: Meaningful comparisons of means or relationships between latent constructs across groups require evidence that measurement is equivalent across the studied groups, a property known as measurement equivalence/invariance (ME/I)...

151 citations
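
Since this tutorial explicitly demonstrates the lavaan and semTools packages, a compressed sketch of that R workflow is shown below; the item names, data frame `dat`, and grouping variable `country` are hypothetical placeholders, and the paper itself walks through more steps:

```r
# Categorical measurement invariance via semTools::measEq.syntax(),
# using the Wu & Estabrook (2016) identification conventions.
library(lavaan)
library(semTools)

model <- 'f =~ y1 + y2 + y3 + y4'
items <- paste0("y", 1:4)

configural <- measEq.syntax(model, data = dat, ordered = items,
                            parameterization = "delta", ID.fac = "std.lv",
                            ID.cat = "Wu.Estabrook.2016",
                            group = "country", return.fit = TRUE)

# For ordered-categorical items, thresholds are constrained before
# (or together with) loadings.
thresholds <- measEq.syntax(model, data = dat, ordered = items,
                            parameterization = "delta", ID.fac = "std.lv",
                            ID.cat = "Wu.Estabrook.2016",
                            group = "country",
                            group.equal = "thresholds", return.fit = TRUE)

lavTestLRT(configural, thresholds)  # scaled chi-square difference test
```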

01 Jan 2009
TL;DR: Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors.
Abstract: Population and sample simulation approaches were used to compare the performance of parallel analysis using principal component analysis (PA-PCA) and parallel analysis using principal axis factoring (PA-PAF) to identify the number of underlying factors. Additionally, the accuracies of the mean eigenvalue and the 95th percentile eigenvalue criteria were examined. The 95th percentile criterion was preferable for assessing the first eigenvalue using either extraction method. In assessing subsequent eigenvalues, PA-PCA tended to perform as well as or better than PA-PAF for models with one factor or multiple minimally correlated factors; the relative performance of the mean eigenvalue and the 95th percentile eigenvalue criteria depended on the number of variables per factor. PA-PAF using the mean eigenvalue criterion generally performed best if factors were more than minimally correlated or if one or more strong general factors as well as group factors were present.

148 citations

Journal ArticleDOI
TL;DR: This paper evaluated the performance of several fit measures when data are assumed to have an ordered categorical, rather than the typically assumed continuous, scale. Using a simulation study based on empirical results, it concluded that classic measures and associated criteria were either unsuitable in a large-group and varied sample-size context or should be adjusted.
Abstract: In spite of the challenges inherent in making dozens of comparisons across heterogeneous populations, a relatively recent interest in scale-score equivalence for non-achievement measures in an international context has emerged. Until recently, operational procedures for establishing measurement invariance using multiple-groups analyses were typically supported with research that was limited in scope to few groups and relatively small sample sizes. Recent research that examined situations more representative of international surveys recommended some revisions to typically used fit measures. The current study extends this research and evaluates the performance of several fit measures when data are assumed to have an ordered categorical, rather than the typically assumed continuous, scale. Using a simulation study based on empirical results, findings indicated that classic measures and associated criteria were either unsuitable in a large-group and varied sample-size context or should be adjusted...

97 citations
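
In lavaan terms, the shift this paper studies amounts to declaring the items ordered categorical, which switches estimation to a WLSMV-type estimator and makes the scaled fit indices the quantities whose cutoffs need re-examining. A minimal sketch under the same hypothetical names as above:

```r
# Multiple-group CFA with ordered-categorical items in lavaan.
library(lavaan)

model <- 'f =~ y1 + y2 + y3 + y4'

fit_cat <- cfa(model, data = survey, group = "country",
               ordered = c("y1", "y2", "y3", "y4"),
               estimator = "WLSMV")

# Scaled CFI/RMSEA are the classic measures whose continuous-data
# cutoffs the paper finds unsuitable or in need of adjustment here.
fitMeasures(fit_cat, c("cfi.scaled", "rmsea.scaled", "srmr"))
```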


Cited by
01 Jan 2006
TL;DR: The Standards provide a framework for ensuring that all relevant issues are addressed, and a large body of evidence attests to the effectiveness of well-constructed instruments in situations where their use is supported by validation data.
Abstract: Educational and psychological testing and assessment are among the most important contributions of behavioral science to our society, offering fundamental and significant improvements over earlier practices. Although it cannot be claimed that all tests are sufficiently refined, nor that all testing is prudent and useful, a large body of evidence attests to the effectiveness of well-constructed instruments in situations where their use is supported by validation data. Proper use of tests can lead to better decisions about individuals and programs than would be made without them, and can also point the way toward broader and fairer access to education and employment. However, misuse of tests can cause considerable harm to test takers and to other participants in decision processes based on test data. The goal of the Standards is to promote sound and ethical use of tests and to provide a basis for evaluating the quality of testing practices. The purpose of publishing the Standards is to establish criteria for evaluating tests, testing practices, and the consequences of test use. Although the evaluation of a test's suitability or application should depend primarily on professional judgment, the Standards provide a framework for ensuring that all relevant issues are addressed. It would be desirable for all authors, sponsors, publishers, and users of professional tests to adopt the Standards and to encourage others to do so as well.

3,905 citations

Journal ArticleDOI
TL;DR: The state of measurement invariance testing and reporting is surveyed, the results of a literature review of studies that tested invariance are detailed, and implications for the future of measurement invariance testing, reporting, and best practices are discussed.

1,526 citations

01 Jan 2013
TL;DR: This entry reproduces a journal editorial board roster rather than a research abstract; the listed members are drawn from universities, testing organizations, and education agencies in several countries.
Abstract: EDITORIAL BOARD
Robert Davison Aviles, Bradley University
Harley E. Baker, California State University–Channel Islands
Jean-Guy Blais, Universite de Montreal, Canada
Catherine Y. Chang, Georgia State University
Robert C. Chope, San Francisco State University
Kevin O. Cokley, University of Missouri, Columbia
Patricia B. Elmore, Southern Illinois University
Shawn Fitzgerald, Kent State University
John J. Fremer, Educational Testing Service
Vicente Ponsoda Gil, Universidad Autonoma de Madrid, Spain
Jo-Ida C. Hansen, University of Minnesota
Charles C. Healy, University of California at Los Angeles
Robin K. Henson, University of North Texas
Flaviu Adrian Hodis, Victoria University of Wellington, New Zealand
Janet K. Holt, Northern Illinois University
David A. Jepsen, The University of Iowa
Gregory Arief D. Liem, National Institute of Education, Nanyang Technological University
Wei-Cheng J. Mau, Wichita State University
Larry Maucieri, Governors State College
Patricia Jo McDivitt, Data Recognition Corporation
Peter F. Merenda, University of Rhode Island
Matthew J. Miller, University of Maryland
Ralph O. Mueller, University of Hartford
Jane E. Myers, The University of North Carolina at Greensboro
Philip D. Parker, University of Western Sydney
Ralph L. Piedmont, Loyola College in Maryland
Alex L. Pieterse, University at Albany, SUNY
Nicholas J. Ruiz, Winona State University
James P. Sampson, Jr., Florida State University
William D. Schafer, University of Maryland, College Park
William E. Sedlacek, University of Maryland, College Park
Marie F. Shoffner, University of Virginia
Len Sperry, Florida Atlantic University
Kevin Stoltz, University of Mississippi
Jody L. Swartz-Kulstad, Seton Hall University
Bruce Thompson, Texas A&M University
Timothy R. Vansickle, Minnesota Department of Education
Steve Vensel, Palm Beach Atlantic University
Dan Williamson, Lindsey Wilson College
F. Robert Wilson, University of Cincinnati

1,306 citations