Author

Daniel A. Sass

Bio: Daniel A. Sass is an academic researcher at the University of Texas at San Antonio. His research focuses on measurement invariance and structural equation modeling. He has an h-index of 20 and has co-authored 49 publications receiving 2,403 citations. His previous affiliations include the University of Wisconsin–Milwaukee and National Taipei University of Technology.

Papers
Journal ArticleDOI
TL;DR: In this article, the authors emphasize the importance of testing for measurement invariance (MI), provide guidance for conducting these tests, and discuss potential causes of noninvariant items, the difference between measurement bias and invariance, remedies for noninvariant measures, and considerations associated with model estimation.
Abstract: Researchers commonly compare means and other statistics across groups with little concern for whether the measure possesses strong factorial invariance (i.e., equal factor loadings and intercepts/thresholds). When this assumption is violated, inaccurate inferences associated with statistical and practical significance can occur. This manuscript emphasizes the importance of testing for measurement invariance (MI) and provides guidance when conducting these tests. Topics discussed are potential causes of noninvariant items, the difference between measurement bias and invariance, remedies for noninvariant measures, and considerations associated with model estimation. Using a sample of 491 teachers, a demonstration is also provided that evaluates whether a newly constructed behavior and instructional management scale is invariant across elementary and middle school teachers. Analyses revealed that the results differ slightly based on the estimation method utilized, although these differences did not greatly in...

440 citations
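
To make the invariance-testing step concrete, the sketch below shows the chi-square difference (Δχ2) test commonly used to compare nested invariance models. It assumes the configural and metric models were already fitted in SEM software; all fit statistics are hypothetical placeholders, and the plain difference test shown applies to normal-theory ML (robust estimators require a scaled variant).

```python
# Minimal sketch of the chi-square difference test between nested
# measurement-invariance models (configural vs. metric). The fit statistics
# are hypothetical placeholders for output from your SEM software.
from scipy.stats import chi2

chisq_configural, df_configural = 210.4, 134  # loadings free across groups
chisq_metric, df_metric = 225.9, 144          # loadings constrained equal

# For nested models, the difference in chi-squares is itself chi-square
# distributed, with df equal to the difference in degrees of freedom.
delta_chisq = chisq_metric - chisq_configural
delta_df = df_metric - df_configural
p_value = chi2.sf(delta_chisq, delta_df)

print(f"delta chi-square = {delta_chisq:.1f} on {delta_df} df, p = {p_value:.3f}")
# A significant result suggests the equal-loadings constraint worsens fit,
# i.e., metric invariance is questionable.
```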

Journal ArticleDOI
TL;DR: The results suggest that depending on the rotation criterion selected and the complexity of the factor pattern matrix, the interpretation of the interfactor correlations and factor pattern loadings can vary substantially.
Abstract: Exploratory factor analysis (EFA) is a commonly used statistical technique for examining the relationships between variables (e.g., items) and the factors (e.g., latent traits) they depict. There are several decisions that must be made when using EFA, with one of the more important being the choice of rotation criterion. This selection can be arduous given the numerous rotation criteria available and the lack of research/literature comparing their function and utility. Historically, researchers have chosen rotation criteria based on whether or not factors are correlated and have failed to consider other important aspects of their data. This study reviews several rotation criteria, demonstrates how they may perform with different factor pattern structures, and highlights for researchers subtle but important differences between each rotation criterion. The choice of rotation criterion is critical, as researchers must make informed decisions about when different rotation criteria may or may not be appropriate. The results suggest that, depending on the rotation criterion selected and the complexity of the factor pattern matrix, the interpretation of the interfactor correlations and factor pattern loadings can vary substantially. Implications and future directions are discussed.

279 citations
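
As a concrete illustration of how the rotation criterion can change the solution, here is a minimal sketch using the third-party Python package factor_analyzer (an assumption; the article prescribes no software). It simulates six items driven by two correlated factors and contrasts an orthogonal (varimax) rotation with an oblique (oblimin) one; only the oblique solution estimates interfactor correlations.

```python
# Minimal sketch contrasting an orthogonal and an oblique rotation criterion.
# Requires the third-party package: pip install factor-analyzer. The data
# and rotation choices are illustrative assumptions.
import numpy as np
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(0)
n = 500
# Two latent factors correlated at .5 drive six items (simple structure).
factors = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n)
pattern = np.array([[0.8, 0.0], [0.7, 0.0], [0.6, 0.0],
                    [0.0, 0.8], [0.0, 0.7], [0.0, 0.6]])
items = factors @ pattern.T + rng.normal(scale=0.5, size=(n, 6))

for rotation in ("varimax", "oblimin"):  # orthogonal vs. oblique criterion
    fa = FactorAnalyzer(n_factors=2, rotation=rotation)
    fa.fit(items)
    print(rotation, "pattern loadings:\n", np.round(fa.loadings_, 2))
    phi = getattr(fa, "phi_", None)  # interfactor correlations (oblique only)
    if phi is not None:
        print("interfactor correlations:\n", np.round(phi, 2))
```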

Journal ArticleDOI
TL;DR: In this paper, the authors compared estimation methods within a measurement invariance (MI) framework and determined if research conclusions using normal-theory maximum likelihood (ML) generalizes to the robust ML (MLR) and weighted least squares means and variance adjusted (WLSMV) estimators.
Abstract: A paucity of research has compared estimation methods within a measurement invariance (MI) framework and determined whether research conclusions using normal-theory maximum likelihood (ML) generalize to the robust ML (MLR) and weighted least squares means and variance adjusted (WLSMV) estimators. Using ordered categorical data, this simulation study aimed to address these queries by investigating 342 conditions. When testing for metric and scalar invariance, Δχ2 results revealed that Type I error rates varied across estimators (ML, MLR, and WLSMV) with symmetric and asymmetric data. The Δχ2 power varied substantially based on the estimator selected, type of noninvariant indicator, number of noninvariant indicators, and sample size. Although some of the changes in approximate fit indexes (ΔAFI) are relatively sample-size independent, researchers who use the ΔAFI with WLSMV should use caution, as these statistics do not perform well with misspecified models. As a supplemental analysis, our results evaluate and sug...

273 citations
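
Because the Δχ2 test is sensitive to sample size, changes in approximate fit indexes (ΔAFI), such as ΔCFI, are often inspected alongside it. Here is a minimal sketch with all chi-square values as hypothetical placeholders, using the commonly cited .01 drop in CFI as the guideline; as the article cautions, this does not transfer cleanly to WLSMV with misspecified models.

```python
# Minimal sketch of a delta-CFI (change in approximate fit index) check
# between nested invariance models. All chi-square / df values are
# hypothetical placeholders for estimator output (ML, MLR, or WLSMV).

def cfi(chisq, df, chisq_base, df_base):
    """Comparative fit index from the model and baseline (null) chi-squares."""
    return 1.0 - max(chisq - df, 0.0) / max(chisq_base - df_base, 1e-12)

chisq_base, df_base = 2150.0, 153                    # baseline (null) model
cfi_metric = cfi(225.9, 144, chisq_base, df_base)    # loadings equal
cfi_scalar = cfi(248.3, 152, chisq_base, df_base)    # + intercepts equal

delta_cfi = cfi_metric - cfi_scalar
print(f"CFI metric = {cfi_metric:.3f}, CFI scalar = {cfi_scalar:.3f}, "
      f"delta CFI = {delta_cfi:.3f}")
# A drop greater than about .01 is commonly read as evidence against the
# added (scalar) constraints.
```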

Journal ArticleDOI
TL;DR: In this article, the authors address some commonly used strategies to model disattenuated structural coefficients between latent variables, and show that when a single unidimensional scale is used to represent a latent...
Abstract: Structural equation modeling allows several methods of estimating the disattenuated association between 2 or more latent variables (i.e., the measurement model). In one common approach, measurement models are specified using item parcels as indicators of latent constructs. Item parcels versus original items are often used as indicators in these contexts to avoid estimation problems or solve issues associated with multivariate normality of the data. One concern associated with the use of item parceling is that no single "correct" approach exists to construct the parcels. Despite the controversy associated with selecting the most appropriate parceling method, less is understood with regard to how these methods influence the structural or path coefficients. By means of simulated and empirical data, this article addresses some commonly used strategies to model disattenuated structural coefficients between latent variables. Results revealed that when a single unidimensional scale is used to represent a latent ...

227 citations
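
For readers new to parceling, the mechanics are simple even though the choice of scheme is contested: subsets of items are averaged into parcels, and the parcels (rather than the raw items) serve as indicators of the latent construct. Below is a minimal sketch with simulated data and an arbitrary sequential assignment, both illustrative assumptions.

```python
# Minimal sketch of item parceling: average subsets of items into parcels
# that later serve as latent-variable indicators. The data and the sequential
# assignment scheme are illustrative; as the article notes, no single
# "correct" parceling method exists.
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
items = pd.DataFrame(rng.normal(size=(300, 9)),
                     columns=[f"item{i}" for i in range(1, 10)])

# Sequential assignment: items 1-3 -> parcel1, 4-6 -> parcel2, 7-9 -> parcel3.
assignment = {"parcel1": ["item1", "item2", "item3"],
              "parcel2": ["item4", "item5", "item6"],
              "parcel3": ["item7", "item8", "item9"]}
parcels = pd.DataFrame({name: items[cols].mean(axis=1)
                        for name, cols in assignment.items()})
print(parcels.head())
# These three parcels, not the nine raw items, would then be specified as
# indicators of the latent construct in the measurement model.
```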

Journal ArticleDOI
TL;DR: Exploratory factor analysis (EFA) has long been used in the social sciences to depict the relationships between variables/items and latent traits, and researchers face many choices when applying it.
Abstract: Exploratory factor analysis (EFA) has long been used in the social sciences to depict the relationships between variables/items and latent traits. Researchers face many choices when using EFA, incl...

210 citations
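
One of the many choices the abstract alludes to is how many factors to retain. The numpy-only sketch below applies the eigenvalue-greater-than-one (Kaiser) screen to simulated data; in practice this rule should be corroborated with a scree plot or parallel analysis.

```python
# Minimal sketch of one common EFA decision: how many factors to retain.
# Applies the eigenvalue-greater-than-one (Kaiser) rule to the item
# correlation matrix; the simulated two-factor data are an assumption.
import numpy as np

rng = np.random.default_rng(2)
factors = rng.normal(size=(400, 2))
pattern = np.array([[0.8, 0.0], [0.7, 0.1], [0.6, 0.0],
                    [0.0, 0.8], [0.1, 0.7], [0.0, 0.6]])
items = factors @ pattern.T + rng.normal(scale=0.5, size=(400, 6))

# Eigenvalues of the correlation matrix, largest first.
eigenvalues = np.linalg.eigvalsh(np.corrcoef(items, rowvar=False))[::-1]
print("eigenvalues:", np.round(eigenvalues, 2))
print("factors retained (Kaiser rule):", int((eigenvalues > 1).sum()))
```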


Cited by
01 Jan 2006
TL;DR: The Standards provide a framework pointing to the effectiveness of high-quality instruments in situations where their use is supported by validation data.
Abstract: Educational and psychological testing and assessment are among the most important contributions of behavioral science to our society, providing fundamental and significant improvements over earlier practices. Although it cannot be claimed that all tests are sufficiently refined, nor that all testing is prudent and useful, a large body of evidence points to the effectiveness of high-quality instruments in situations where their use is supported by validation data. The proper use of tests can lead to better decisions about individuals and programs than would be made without them, and can also point the way toward broader and fairer access to education and employment. Poor use of tests, however, can cause considerable harm to test takers and to other participants in decisions made on the basis of test data. The aim of the Standards is to promote the sound and ethical use of tests and to establish a basis for evaluating the quality of testing practices. The purpose of publishing the Standards is to establish criteria for the evaluation of tests, testing practices, and the consequences of test use. Although the evaluation of a test's suitability or of a particular application should rest primarily on professional judgment, the Standards provide a framework that ensures all relevant issues are addressed. It would be desirable for all developers, sponsors, publishers, and users of professional tests to adopt the Standards and to encourage others to do so as well.

3,905 citations

Journal ArticleDOI
TL;DR: This review evaluates the effectiveness of a range of interventions designed to prevent obesity in children that include diet components, physical activity components, or both, and rates the overall certainty of the evidence.
Abstract: The current evidence suggests that many diet and exercise interventions to prevent obesity in children are not effective in preventing weight gain, but can be effective in promoting a healthy diet and increased physical activity levels.

Being very overweight (obese) can cause health, psychological and social problems for children. Children who are obese are more likely to have weight and health problems as adults. Programmes designed to prevent obesity focus on modifying one or more of the factors considered to promote obesity.

This review included 22 studies that tested a variety of intervention programmes, which involved increased physical activity and dietary changes, singly or in combination. Participants were under 18 and living in Asia, South America, Europe or North America. There is not enough evidence from trials to prove that any one particular programme can prevent obesity in children, although comprehensive strategies to address dietary and physical activity change, together with psycho-social support and environmental change, may help. There was a trend for newer interventions to involve their respective communities and to include evaluations.

Future research might usefully assess changes made on behalf of entire populations, such as improvements in the types of foods available at schools and in the availability of safe places to run and play, and should assess health effects and costs over several years.

The programmes in this review used different strategies to prevent obesity, so direct comparisons were difficult. Also, the duration of the studies ranged from 12 weeks to three years, but most lasted less than a year.

2,464 citations

Posted Content
TL;DR: In this article, the authors present methods that allow researchers to test causal claims in situations where randomization is not possible or when causal interpretation could be confounded; these methods include fixed-effects panel, sample selection, instrumental variable, regression discontinuity, and difference-in-differences models.
Abstract: Social scientists often estimate models from correlational data, where the independent variable has not been exogenously manipulated; they also make implicit or explicit causal claims based on these models. When can these claims be made? We answer this question by first discussing design and estimation conditions under which model estimates can be interpreted, using the randomized experiment as the gold standard. We show how endogeneity – which includes omitted variables, omitted selection, simultaneity, common-method variance, and measurement error – renders estimates causally uninterpretable. Second, we present methods that allow researchers to test causal claims in situations where randomization is not possible or when causal interpretation could be confounded; these methods include fixed-effects panel, sample selection, instrumental variable, regression discontinuity, and difference-in-differences models. Third, we take stock of the methodological rigor with which causal claims are being made in a social sciences discipline by reviewing a representative sample of 110 articles on leadership published in the previous 10 years in top-tier journals. Our key finding is that researchers fail to address at least 66% and up to 90% of design and estimation conditions that make causal claims invalid. We conclude by offering 10 suggestions on how to improve non-experimental research.

1,537 citations
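
Of the designs the article surveys, difference-in-differences is the easiest to sketch in a few lines. The simulated example below (statsmodels is an assumption; the article mandates no software) identifies the treatment effect from the treated-by-post interaction, which is valid under the parallel-trends assumption.

```python
# Minimal difference-in-differences sketch: the causal effect is identified
# by the treated-by-post interaction. Simulated data and effect sizes are
# illustrative assumptions, not the article's own analysis.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 1000
treated = rng.integers(0, 2, n)    # group indicator
post = rng.integers(0, 2, n)       # pre/post period indicator
y = (1.0 + 0.5 * treated + 1.5 * post
     + 2.0 * treated * post        # true treatment effect = 2.0
     + rng.normal(size=n))
df = pd.DataFrame({"y": y, "treated": treated, "post": post})

fit = smf.ols("y ~ treated * post", data=df).fit()
# Under parallel trends, the interaction coefficient recovers the effect.
print(fit.params["treated:post"])
```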

01 Jan 2013
TL;DR: No abstract is available for this 2013 work; the indexed text reproduces the editorial board listing of the publishing journal.
Abstract: EDITORIAL BOARD
Robert Davison Aviles, Bradley University
Harley E. Baker, California State University–Channel Islands
Jean-Guy Blais, Universite de Montreal, Canada
Catherine Y. Chang, Georgia State University
Robert C. Chope, San Francisco State University
Kevin O. Cokley, University of Missouri, Columbia
Patricia B. Elmore, Southern Illinois University
Shawn Fitzgerald, Kent State University
John J. Fremer, Educational Testing Service
Vicente Ponsoda Gil, Universidad Autonoma de Madrid, Spain
Jo-Ida C. Hansen, University of Minnesota
Charles C. Healy, University of California at Los Angeles
Robin K. Henson, University of North Texas
Flaviu Adrian Hodis, Victoria University of Wellington, New Zealand
Janet K. Holt, Northern Illinois University
David A. Jepsen, The University of Iowa
Gregory Arief D. Liem, National Institute of Education, Nanyang Technological University
Wei-Cheng J. Mau, Wichita State University
Larry Maucieri, Governors State College
Patricia Jo McDivitt, Data Recognition Corporation
Peter F. Merenda, University of Rhode Island
Matthew J. Miller, University of Maryland
Ralph O. Mueller, University of Hartford
Jane E. Myers, The University of North Carolina at Greensboro
Philip D. Parker, University of Western Sydney
Ralph L. Piedmont, Loyola College in Maryland
Alex L. Pieterse, University at Albany, SUNY
Nicholas J. Ruiz, Winona State University
James P. Sampson, Jr., Florida State University
William D. Schafer, University of Maryland, College Park
William E. Sedlacek, University of Maryland, College Park
Marie F. Shoffner, University of Virginia
Len Sperry, Florida Atlantic University
Kevin Stoltz, University of Mississippi
Jody L. Swartz-Kulstad, Seton Hall University
Bruce Thompson, Texas A&M University
Timothy R. Vansickle, Minnesota Department of Education
Steve Vensel, Palm Beach Atlantic University
Dan Williamson, Lindsey Wilson College
F. Robert Wilson, University of Cincinnati

1,306 citations