Journal ArticleDOI

Response Time Effort: A New Measure of Examinee Motivation in Computer-Based Tests

01 Apr 2005-Applied Measurement in Education (Lawrence Erlbaum Associates, Inc.)-Vol. 18, Iss: 2, pp 163-183
TL;DR: In this article, the authors introduce a new measure, termed response time effort (RTE), which is based on the hypothesis that unmotivated examinees will answer too quickly (i.e., before they have time to read and fully consider the item).
Abstract: When low-stakes assessments are administered, the degree to which examinees give their best effort is often unclear, complicating the validity and interpretation of the resulting test scores. This study introduces a new method, based on item response time, for measuring examinee test-taking effort on computer-based test items. This measure, termed response time effort (RTE), is based on the hypothesis that when administered an item, unmotivated examinees will answer too quickly (i.e., before they have time to read and fully consider the item). Psychometric characteristics of RTE scores were empirically investigated and supportive evidence for score reliability and validity was found. Potential applications of RTE scores and their implications are discussed.
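The abstract describes RTE as flagging examinees who answer before they could plausibly have read the item. A minimal sketch of that idea, assuming the common formulation (an examinee's RTE score is the proportion of items answered with "solution behavior," i.e., response time at or above an item threshold); the function name and the threshold values here are illustrative, not the paper's:

```python
def rte_score(response_times, thresholds):
    """Proportion of items whose response time meets the item's threshold.

    Responses faster than the threshold are treated as rapid guesses
    (non-effortful); the rest count as solution behavior.
    """
    if len(response_times) != len(thresholds):
        raise ValueError("one response time per item threshold required")
    solution = sum(1 for rt, th in zip(response_times, thresholds)
                   if rt >= th)
    return solution / len(response_times)

# Example: 5 items with a 3-second threshold each; two rapid responses.
times = [12.4, 1.1, 8.0, 0.9, 15.2]
print(rte_score(times, [3.0] * 5))  # -> 0.6
```

In practice, thresholds are set per item (longer items warrant longer thresholds), which is why the sketch takes a vector of thresholds rather than a single cutoff.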


Citations
Journal ArticleDOI
TL;DR: In this article, the authors summarize existing approaches to detecting insufficient effort responding (IER) to low-stakes surveys, comprehensively evaluate these approaches, and provide convergent validity evidence for the various IER indices.
Abstract: Responses provided by unmotivated survey participants in a careless, haphazard, or random fashion can threaten the quality of data in psychological and organizational research. The purpose of this study was to summarize existing approaches to detect insufficient effort responding (IER) to low-stakes surveys and to comprehensively evaluate these approaches. In an experiment (Study 1) and a nonexperimental survey (Study 2), 725 undergraduates responded to a personality survey online. Study 1 examined the presentation of warnings to respondents as a means of deterrence and showed the relative effectiveness of four indices for detecting IE responses: response time, long string, psychometric antonyms, and individual reliability coefficients. Study 2 demonstrated that the detection indices measured the same underlying construct and showed the improvement of psychometric properties (item interrelatedness, facet dimensionality, and factor structure) after removing IE respondents identified by each index. Three approaches (response time, psychometric antonyms, and individual reliability) with high specificity and moderate sensitivity were recommended as candidates for future application in survey research. The identification of effective IER indices may help researchers ensure the quality of their low-stake survey data. This study is a first attempt to comprehensively evaluate IER detection methods using both experimental and nonexperimental designs. Results from both studies corroborated each other in suggesting the three more effective approaches. This study also provided convergent validity evidence regarding various indices for IER.
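Of the four IER indices this abstract names (response time, long string, psychometric antonyms, individual reliability coefficients), the long-string index is the simplest: the length of the longest run of identical consecutive responses. A hedged sketch, assuming that formulation; the cutoff for flagging a respondent varies by instrument and is not shown:

```python
def long_string(responses):
    """Length of the longest run of identical consecutive responses."""
    if not responses:
        return 0
    longest = run = 1
    for prev, cur in zip(responses, responses[1:]):
        # Extend the current run on a repeat, otherwise restart it.
        run = run + 1 if cur == prev else 1
        longest = max(longest, run)
    return longest

# A respondent who picked "3" four times in a row scores 4.
print(long_string([3, 3, 3, 3, 2, 5, 5, 1]))  # -> 4
```

A respondent whose long-string value approaches the survey length is a strong candidate for careless responding; moderate values require an instrument-specific cutoff.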

707 citations


Cites methods from "Response Time Effort: A New Measure..."

  • ...Using computer-based assessments from 472 new college students, Wise and Kong (2005) found that response time converged significantly with self-reported effort and person-fit statistics in indicating response effort....


Journal ArticleDOI
TL;DR: In this paper, the advantages of using latent profile analysis (LPA) over other traditional techniques (such as multiple regression and cluster analysis) when analyzing multidimensional data like achievement goals are discussed.

570 citations

Journal ArticleDOI
TL;DR: The authors show that Screener passage correlates with politically relevant characteristics, which limits the generalizability of studies that exclude failers; they conclude that attention is best measured using multiple Screener questions and that studies using Screeners can balance the goals of internal and external validity by presenting results conditional on different levels of attention.
Abstract: Good survey and experimental research requires subjects to pay attention to questions and treatments, but many subjects do not. In this article, we discuss “Screeners” as a potential solution to this problem. We first demonstrate Screeners’ power to reveal inattentive respondents and reduce noise. We then examine important but understudied questions about Screeners. We show that using a single Screener is not the most effective way to improve data quality. Instead, we recommend using multiple items to measure attention. We also show that Screener passage correlates with politically relevant characteristics, which limits the generalizability of studies that exclude failers. We conclude that attention is best measured using multiple Screener questions and that studies using Screeners can balance the goals of internal and external validity by presenting results conditional on different levels of attention.

564 citations


Cites background from "Response Time Effort: A New Measure..."

  • ...…information, we show that Screener passage is associated with greater time spent on additional […] both political science and psychology that uses amount of time spent on a survey page as a measure of respondent effort (Huang et al. 2012; Malhotra 2008; Wise and DeMars 2006; Wise and Kong 2005)....


Journal ArticleDOI
TL;DR: In this paper, the authors investigated whether information literacy skills relate to self-assessments in the way competency theory predicts and found a significant negative correlation between information literacy scores and knowledge of the library.

256 citations


Cites background from "Response Time Effort: A New Measure..."

  • ...1 For a discussion of response time effort analysis, see Wise and Kong (2005)....


Journal ArticleDOI
TL;DR: In this article, the authors found that students' self-reported motivation significantly predicted test scores and that a substantial performance gap emerged between students in different motivational conditions (effect size as large as .68).
Abstract: With the pressing need for accountability in higher education, standardized outcomes assessments have been widely used to evaluate learning and inform policy. However, the critical question on how scores are influenced by students’ motivation has been insufficiently addressed. Using random assignment, we administered a multiple-choice test and an essay across three motivational conditions. Students’ self-report motivation was also collected. Motivation significantly predicted test scores. A substantial performance gap emerged between students in different motivational conditions (effect size as large as .68). Depending on the test format and condition, conclusions about college learning gain (i.e., value added) varied dramatically from substantial gain (d = 0.72) to negative gain (d = −0.23). The findings have significant implications for higher education stakeholders at many levels.

243 citations


Cites background or methods from "Response Time Effort: A New Measure..."

  • ...Besides relying on student self-report, researchers have also examined response time effort (RTE) for computer-based, unspeeded tests to determine student motivation (S. L. Wise & DeMars, 2006; S. L. Wise & Kong, 2005)....


  • ...To eliminate the impact of low performance motivation on test results, researchers have explored ways to filter responses from unmotivated students identified through either their self-report or response time effort (S. L. Wise & DeMars, 2005, 2006; S. L. Wise & Kong, 2005; V. L. Wise et al., 2006)....


  • ...In addition, most previous studies relied on data from a single program or single institution (Sundre & Kitsantas, 2004; S. L. Wise & Kong, 2005; V. L. Wise et al., 2006; Wolf & Smith, 1995), which may limit the generalizability of the findings....


References
Book
01 Mar 1981

7,518 citations


"Response Time Effort: A New Measure..." refers background or methods in this paper

  • ...Early research has explored the use of response time information in obtaining more accurate proficiency level estimates (Rasch, 1960; Tatsuoka & Tatsuoka, 1980; Thissen, 1983)....


  • ...A final research question concerns the use of response time information in the proficiency estimation process as has been explored previously (Rasch, 1960; Tatsuoka & Tatsuoka, 1980; Thissen, 1983)....


Book
21 Sep 1995
TL;DR: A textbook surveying motivation, from its historical foundations through the major theories to teacher, classroom, and school influences.
Abstract (table of contents):
Chapter 1. Motivation: Introduction and Historical Foundations
Chapter 2. Expectancy-Value Theories of Motivation
Chapter 3. Attribution Theory
Chapter 4. Social Cognitive Theory
Chapter 5. Goals and Goal Orientations
Chapter 6. Interest and Affect
Chapter 7. Intrinsic and Extrinsic Motivation
Chapter 8. Sociocultural Influences
Chapter 9. Teacher Influences
Chapter 10. Classroom and School Influences
Glossary, References, Name Index, Subject Index

5,046 citations


"Response Time Effort: A New Measure..." refers background in this paper

  • ...For example, motivation researchers have found that some individuals are predisposed to attribute failure on a task to lack of effort over lack of ability (see Pintrich & Schunk, 2002, for a good discussion of attribution theory)....


Journal ArticleDOI
TL;DR: In this article, a theoretical model of test-taking motivation is presented, with a synthesis of previous research indicating that low student motivation is associated with a substantial decrease in test performance.
Abstract: Student test-taking motivation in low-stakes assessment testing is examined in terms of both its relationship to test performance and the implications of low student effort for test validity. A theoretical model of test-taking motivation is presented, with a synthesis of previous research indicating that low student motivation is associated with a substantial decrease in test performance. A number of assessment practices and data analytic procedures for managing the problems posed by low student motivation are discussed.

435 citations


"Response Time Effort: A New Measure..." refers methods in this paper

  • ...…filtering based on self-reported examinee effort was used, (a) test performance improved, (b) test score reliability remained relatively constant, and (c) the correlation between test performance and an external variable showed a substantial increase (Sundre & Wise, 2003; Wise & DeMars, 2005)....


  • ...Wise and DeMars (2005) conducted a synthesis of 15 of these studies, finding an average effect size exceeding ½ SD between the two groups....


  • ...In each of the studies cited in Wise and DeMars (2005), examinee test-taking effort was measured using posttest examinee self-reports....


Journal ArticleDOI
TL;DR: In this paper, person-fit methods based on classical test theory and item response theory (IRT), along with methods investigating particular types of response behavior on tests, are examined; the usefulness of person-fit statistics for improving measurement depends on the application.
Abstract: Person-fit methods based on classical test theory and item response theory (IRT), and methods investigating particular types of response behavior on tests, are examined. Similarities and differences among person-fit methods and their advantages and disadvantages are discussed. Sound person-fit methods have been derived for the Rasch model. For other IRT models, the empirical and theoretical distributions differ for most person-fit statistics when used with short and moderate length tests. The detection rate of person-fit statistics depends on the type of misfitting item-score patterns, test length, and trait levels. The usefulness of person-fit statistics for improving measurement depends on the application.

369 citations


"Response Time Effort: A New Measure..." refers background in this paper

  • ...Meijer and Sijtsma (2001) provided a good overview of person-fit research....
