Simulating Computer Adaptive Testing With the Mood and Anxiety Symptom Questionnaire
Frequently Asked Questions (8)
Q2. What future work is mentioned in the paper "Simulating computer adaptive testing with the mood and anxiety symptom questionnaire"?
In future research, these requirements have to be investigated to assess the utility of the MASQ CAT in ROM. As a further line of research, the authors propose investigating whether adding items with either milder or stronger content results in more uniform test information, because of the increased information in the extremes. For future CATs, researchers should decide on a minimally required correlation between the CAT and the full-scale estimates, which might be met by the proposed solutions. Two final lines of research that can be pursued with the MASQ are the investigation of clinical cut points, with their sensitivity and specificity for mood and/or anxiety diagnoses, and the factor structure for patients.
Q3. What are the two final lines of research that can be pursued with the MASQ?
Two final lines of future research that can be pursued with the MASQ are the investigation of clinical cut points, with their sensitivity and specificity for mood and/or anxiety diagnoses, and the factor structure for patients.
Q4. Why do items containing DIF cause bias in latent trait scores?
Items containing DIF cause bias in latent trait scores because persons from different groups with the same latent trait score have different probabilities of selecting item response categories.
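This group-dependence can be illustrated with a minimal sketch. The item parameters and group labels below are hypothetical, and a two-parameter logistic model stands in for the full graded model: when an item has a higher difficulty parameter in one group (uniform DIF), two respondents with the same latent trait score have different probabilities of endorsing the item.

```python
import math

def p_endorse(theta, a, b):
    """Two-parameter logistic IRT: probability of endorsing an item
    for a respondent with latent trait theta, given discrimination a
    and difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# Hypothetical parameters: same discrimination in both groups, but the
# item is "harder" (higher difficulty) for group B -- uniform DIF.
theta = 0.0  # the same latent trait level in both groups
p_group_a = p_endorse(theta, a=1.5, b=-0.5)
p_group_b = p_endorse(theta, a=1.5, b=0.5)

# Despite identical trait levels, the endorsement probabilities differ,
# so trait estimates based on this item are biased for one group.
print(round(p_group_a, 3), round(p_group_b, 3))
```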
Q5. Why are item response results rarely strictly unidimensional?
Because mental health constructs are generally complex, item response results are rarely strictly unidimensional (Reise, Morizot, & Hays, 2007).
Q6. Why did the authors choose to stop the CAT simulation from administering new items once the latent trait estimate reached a given precision?
Because this study set out to find measures that are both reliable and shorter, the authors specified that the CAT simulation stopped administering new items when the latent trait estimate of a patient reached SE(θ) ≤ 0.3, comparable to a marginal reliability of .90 (Green, Bock, Humphreys, Linn, & Reckase, 1984).
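The correspondence between the standard error cutoff and reliability follows from the standard relation reliability = 1 − SE(θ)², which holds when the latent trait is scaled to unit variance. A quick check, assuming that scaling:

```python
def marginal_reliability(se):
    """Reliability implied by a standard error of the latent trait
    estimate, assuming the trait is scaled to unit variance:
    reliability = 1 - SE(theta)^2."""
    return 1.0 - se ** 2

# SE(theta) <= 0.3 corresponds to a reliability of about .90,
# which is why the authors use it as the stopping criterion.
print(round(marginal_reliability(0.3), 4))
print(round(marginal_reliability(0.25), 4))  # the stricter criterion
```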
Q7. What is the way to deal with the lower psychometric quality of the SA scale?
A way to deal with the lower psychometric quality of the SA scale is to set a minimum number of items the CAT should administer, or to specify a stricter standard error; for example, SE(θ) ≤ 0.25.
Q8. What is the main advantage of the GRM model?
The GRM is often the preferred IRT model, because it is easier to illustrate to test users than other models, and the item parameters are easy to interpret with regard to responder behavior (Ostini, Finkelman, & Nering, 2015; Smits et al., 2011).
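The responder behavior the GRM describes can be sketched briefly. In the GRM, the cumulative probability P(X ≥ k | θ) follows a 2PL curve at each category threshold, and the probability of choosing a given category is the difference between adjacent cumulative curves. The item parameters below are hypothetical, chosen only to mimic a five-category MASQ-style item:

```python
import math

def grm_category_probs(theta, a, thresholds):
    """Graded response model: cumulative probabilities P(X >= k)
    follow a 2PL curve at each threshold; category probabilities are
    differences between adjacent cumulative curves."""
    cum = [1.0]  # P(X >= lowest category) is always 1
    for b in thresholds:
        cum.append(1.0 / (1.0 + math.exp(-a * (theta - b))))
    cum.append(0.0)  # probability of exceeding the highest category is 0
    return [cum[k] - cum[k + 1] for k in range(len(thresholds) + 1)]

# Hypothetical 5-category item ("not at all" .. "extremely"): at an
# average trait level, the middle category is the most likely response.
probs = grm_category_probs(theta=0.0, a=1.8, thresholds=[-1.5, -0.5, 0.5, 1.5])
print([round(p, 3) for p in probs])
```

The threshold parameters are easy to interpret: each one marks the trait level at which a respondent becomes more likely than not to answer in that category or above, which is one reason the GRM is easy to illustrate to test users.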