
Showing papers by "Susan E. Embretson" published in 2021


Book Chapter
01 Jan 2021
TL;DR: In this paper, the relationship of item log response times to item differences in difficulty and content was examined within subjects, and the results indicate that existing models may be differentially effective depending on examinees' predominant strategy in item solving.
Abstract: A wide variety of models have been developed for item response times. The models vary in both primary purpose and underlying assumptions. As noted by van der Linden (2016), several item response models assume response time and response accuracy are highly dependent processes. However, the nature of this assumed relationship varies substantially between models; that is, greater accuracy may be associated with either increased or decreased response time. In addition to these conflicting assumptions, examinees may differ in their relative response times across items. In the current study, the relationship of item log response times to item differences in difficulty and content was examined within subjects. Although at the item level mean item log response time was positively correlated with difficulty, a broad distribution of these correlations was found within subjects, ranging from positive to negative. These results indicate that existing models may be differentially effective depending on examinees' predominant strategy in item solving.

2 citations
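
To illustrate the kind of analysis the abstract describes, the sketch below computes an item-level correlation and a distribution of within-subject correlations between item log response times and item difficulties. It uses simulated placeholder data; the sample sizes, variable names, and the choice of Pearson correlation are assumptions for illustration, not the study's actual procedure.

import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_persons, n_items = 200, 30

# Simulated item difficulties and log response times (placeholder data).
difficulty = rng.normal(0.0, 1.0, n_items)
log_rt = 3.0 + 0.3 * difficulty + rng.normal(0.0, 0.8, (n_persons, n_items))

# Item-level correlation: mean log response time per item vs. item difficulty.
item_r, _ = pearsonr(log_rt.mean(axis=0), difficulty)

# Within-subject correlations: one coefficient per examinee.
within_r = np.array([pearsonr(log_rt[p], difficulty)[0] for p in range(n_persons)])

print(f"item-level r = {item_r:.2f}")
print(f"within-subject r: mean = {within_r.mean():.2f}, "
      f"range = [{within_r.min():.2f}, {within_r.max():.2f}]")

With real data, the within-subject coefficients would be examined for the kind of broad positive-to-negative spread the abstract reports.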


Journal Article
TL;DR: In this article, the authors examined varying approaches to using the year-end test to diagnose mastery at the node level, with item-level predictions obtained from the parameters of varying IRT models.
Abstract: An important feature of learning maps, such as Dynamic Learning Maps and Enhanced Learning Maps, is their ability to accommodate nationwide specifications of standards, such as the Common Core State Standards (CCSS), within the map nodes along with relevant instruction. These features are especially useful for remedial instruction, provided that accurate diagnosis is available. The year-end achievement tests are potentially useful in this regard. Unfortunately, the current use of total scores or area sub-scores is either not sufficiently precise or not sufficiently reliable to diagnose mastery at the node level, especially when students vary in their patterns of mastery. The current study examines varying approaches to using the year-end test for diagnosis. Prediction at the item level was obtained using parameters from varying IRT models. The results support using mixture-class IRT models to predict mastery, in which either items or node scores vary in difficulty for students in different latent classes. Not only did the mixture models fit better, but trait score reliability was maintained for the predictions of node mastery.
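
The sketch below illustrates the general mixture-class idea from the abstract: item difficulties differ by latent class, class membership is inferred from a response pattern, and node mastery is predicted from the class-weighted response probability. The two-class Rasch setup, all parameter values, and the node-to-item mapping are illustrative assumptions, not the models estimated in the study.

import numpy as np

def rasch_p(theta, b):
    """Rasch probability of a correct response given trait theta and difficulty b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# Two latent classes with different item difficulty profiles (assumed values).
b_class = np.array([
    [-1.0, -0.5, 0.0, 0.5, 1.0],    # class 0
    [ 0.5,  1.0, 0.2, -0.8, -0.2],  # class 1
])
class_prior = np.array([0.6, 0.4])

def posterior_class(responses, theta):
    """Posterior class probabilities given a scored response pattern and trait estimate."""
    like = np.array([
        np.prod(np.where(responses == 1, rasch_p(theta, b), 1 - rasch_p(theta, b)))
        for b in b_class
    ])
    post = class_prior * like
    return post / post.sum()

def node_mastery_prob(node_items, responses, theta):
    """Class-weighted probability of mastering a node (a set of item indices)."""
    post = posterior_class(responses, theta)
    per_class = np.array([rasch_p(theta, b[node_items]).mean() for b in b_class])
    return float(post @ per_class)

responses = np.array([1, 1, 0, 1, 0])
print(node_mastery_prob(np.array([2, 3]), responses, theta=0.3))

Because the same response pattern can imply different difficulty profiles in different classes, the class-weighted prediction can diagnose node mastery differently for students with different patterns of mastery, which is the motivation the abstract gives for the mixture approach.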