Journal ISSN: 1529-7713

Journal of Applied Measurement

About: Journal of Applied Measurement is an academic journal that publishes mainly in the areas of the Rasch model and the polytomous Rasch model. Its ISSN identifier is 1529-7713. Over its lifetime, the journal has published 464 papers, which have received 9,932 citations.


Papers
Journal Article
TL;DR: Eight guidelines are suggested to aid the analyst in optimizing the manner in which rating scale categories cooperate in order to improve the utility of the resultant measures.
Abstract: Rating scales are employed as a means of extracting more information from an item than would be obtained from a mere "yes/no", "right/wrong", or other dichotomy. But does this additional information increase measurement accuracy and precision? Eight guidelines are suggested to aid the analyst in optimizing the manner in which rating scale categories cooperate in order to improve the utility of the resultant measures. Though these guidelines are presented within the context of Rasch analysis, they reflect aspects of rating scale functioning that impact all methods of analysis. The guidelines address rating-scale-based data such as category frequency, ordering, rating-to-measure inferential coherence, and the quality of the scale from measurement and statistical perspectives. The manner in which the guidelines prompt recategorization or reconceptualization of the rating scale is indicated. Utilization of the guidelines is illustrated through their application to two published data sets.
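For context (added here, not part of the abstract): the kind of rating scale analysis the guidelines address is typically carried out with the Andrich rating scale model, a polytomous Rasch model commonly written as

```latex
\log\frac{P_{nik}}{P_{ni(k-1)}} = \theta_n - \beta_i - \tau_k
```

where \(P_{nik}\) is the probability that person \(n\) responds in category \(k\) of item \(i\), \(\theta_n\) is the person measure, \(\beta_i\) the item difficulty, and \(\tau_k\) the threshold between categories \(k-1\) and \(k\). Disordered or rarely used thresholds \(\tau_k\) are exactly the kind of malfunction the guidelines are designed to catch.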

1,174 citations

Journal Article
TL;DR: The purpose of this research is to extend the work of Smith (1992, 1996) and Smith and Miao (1991, 1994) in comparing item fit statistics and principal component analysis as tools for assessing the unidimensionality requirement of Rasch models, and to demonstrate the potential impact of multidimensionality on norm- and criterion-referenced person measure interpretations.
Abstract: The purpose of this research is twofold. The first aim is to extend the work of Smith (1992, 1996) and Smith and Miao (1991, 1994) in comparing item fit statistics and principal component analysis as tools for assessing the unidimensionality requirement of Rasch models. The second is to demonstrate methods for exploring how violations of the unidimensionality requirement influence person measurement. For the first study, rating scale data were simulated to represent varying degrees of multidimensionality and varying proportions of items contributing to each component. The second study used responses to a 24-item Attention Deficit Hyperactivity Disorder scale obtained from 317 college undergraduates. The simulation study reveals that both an iterative item fit approach and principal component analysis of standardized residuals are effective in detecting items simulated to contribute to multidimensionality. The methods presented in Study 2 demonstrate the potential impact of multidimensionality on norm- and criterion-referenced person measure interpretations. The results provide researchers with quantitative information to assist with the qualitative judgment as to whether the impact of multidimensionality is severe enough to warrant removing items from the analysis.
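The detection method the abstract describes — principal component analysis of standardized residuals — can be sketched in a few lines. The sketch below is a hypothetical illustration, not the paper's simulation design: all sample sizes, loadings, and difficulties are invented, and for brevity the expected scores use the generating parameters rather than Rasch estimates.

```python
import numpy as np

# Hypothetical demo: 2000 persons, 10 dichotomous items; items 0-4 also
# load on a second (nuisance) dimension, items 5-9 are unidimensional.
rng = np.random.default_rng(0)
n_persons, n_items = 2000, 10

theta = rng.normal(0.0, 1.0, n_persons)      # primary latent trait
nuisance = rng.normal(0.0, 1.0, n_persons)   # secondary dimension
b = np.linspace(-1.5, 1.5, n_items)          # item difficulties
loading = np.array([1.0] * 5 + [0.0] * 5)    # only items 0-4 are contaminated

logit = theta[:, None] + loading[None, :] * nuisance[:, None] - b[None, :]
x = (rng.random((n_persons, n_items)) < 1.0 / (1.0 + np.exp(-logit))).astype(float)

# Expected scores under a purely unidimensional Rasch model (shortcut: we plug
# in the generating theta and b; a real analysis would use estimated measures).
p = 1.0 / (1.0 + np.exp(-(theta[:, None] - b[None, :])))
z = (x - p) / np.sqrt(p * (1.0 - p))         # standardized residuals

# PCA of the residuals: eigendecomposition of their correlation matrix.
# A large first component whose loadings cluster on a subset of items flags
# those items as contributing to multidimensionality.
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
first_contrast = eigvecs[:, -1]              # loadings on the largest component

print(np.round(eigvals[-1], 2))
print(np.round(first_contrast, 2))
```

A first residual component whose largest loadings cluster on items 0–4 (the items simulated to share a second dimension), with an eigenvalue clearly above the noise level, is the multidimensionality signal the abstract refers to.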

889 citations

Journal Article
TL;DR: A catalog of rater effects is presented, introducing effects that researchers have studied over the last three-quarters of a century in order to help readers gain a historical perspective on how those effects have been conceptualized.
Abstract: The purpose of this two-part paper is to introduce researchers to the many-facet Rasch measurement (MFRM) approach for detecting and measuring rater effects. In Part II of the paper, researchers will learn how to use the Facets (Linacre, 2001) computer program to study five effects: leniency/severity, central tendency, randomness, halo, and differential leniency/severity. As we introduce each effect, we operationally define it within the context of a MFRM approach, specify the particular measurement model(s) needed to detect it, identify group- and individual-level statistical indicators of the effect, and show output from a Facets analysis, pinpointing the various indicators and explaining how to interpret each one. At the close of the paper, we describe other statistical procedures that have been used to detect and measure rater effects to help researchers become aware of important and influential literature on the topic and to gain an appreciation for the diversity of psychometric perspectives that researchers bring to bear on their work. Finally, we consider future directions for research in the detection and measurement of rater effects.
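For context (added here, not part of the abstract): the many-facet Rasch model referred to is commonly written, for a three-facet design with persons, items, and raters, as

```latex
\log\frac{P_{nijk}}{P_{nij(k-1)}} = \theta_n - \beta_i - \lambda_j - \tau_k
```

where \(\theta_n\) is the proficiency of person \(n\), \(\beta_i\) the difficulty of item \(i\), \(\lambda_j\) the severity of rater \(j\), and \(\tau_k\) the threshold for rating category \(k\). Rater severity/leniency appears directly as \(\lambda_j\); effects such as central tendency and randomness are diagnosed from category usage and fit statistics rather than from a model parameter.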

385 citations

Journal Article
TL;DR: The authors identify how analyses, especially those conducted within a Rasch measurement framework, can be used to provide evidence supporting validity arguments formulated during the instrument development process.
Abstract: Accumulation of validity evidence is an important part of the instrument development process. In Part I of a two-part series, we provided an overview of validity concepts and described how instrument development efforts can be conducted to facilitate the development of validity arguments. In this, Part II of the series, we identify how analyses, especially those conducted within a Rasch measurement framework, can be used to provide evidence to support validity arguments that are founded during the instrument development process.

324 citations

Journal Article
TL;DR: This article outlines methods in Rasch measurement that are used to gather evidence for reliability and validity and attempts to articulate how these methods may be linked with current views of reliability and validity.
Abstract: In an era of high-stakes testing and evaluation in education, psychology, and health care, there is a need for rigorous methods and standards for obtaining evidence of the reliability of measures and the validity of inferences. Messick (1989, 1995), the Standards for Educational and Psychological Testing (American Psychological Association, American Educational Research Association, and National Council on Measurement in Education, 1999), and the Medical Outcomes Trust (1995), among others, have described methods that may be used to gather evidence for reliability and validity, but ignored the potential role Rasch measurement may play in this process. This article outlines methods in Rasch measurement that are used to gather evidence for reliability and validity and attempts to articulate how these methods may be linked with current views of reliability and validity.

311 citations

Network Information
Related Journals (5)
Educational and Psychological Measurement: 7K papers, 272.8K citations, 82% related
Structural Equation Modeling: 1.2K papers, 205K citations, 80% related
Psychological Methods: 1K papers, 230.3K citations, 79% related
Teaching and Teacher Education: 4K papers, 267.2K citations, 77% related
Educational Researcher: 2.7K papers, 322.6K citations, 77% related
Performance Metrics

Number of papers from the journal in previous years:

Year    Papers
2020    6
2019    12
2018    16
2017    17
2016    20
2015    24