
Goldhammer, Frank; Naumann, Johannes; Stelter, Annette; Tóth, Krisztina; Rölke, Heiko; Klieme, Eckhard
The time on task effect in reading and problem solving is moderated by task
difficulty and skill. Insights from a computer-based large-scale assessment
Formally and substantively revised version of the original publication in:
The Journal of Educational Psychology 106 (2014) 3, pp. 608-626
Please use the following URN or DOI for reference:
urn:nbn:de:0111-pedocs-179679
10.25656/01:17967
https://nbn-resolving.org/urn:nbn:de:0111-pedocs-179679
https://doi.org/10.25656/01:17967
Terms of use
We grant a non-exclusive, non-transferable, individual, and limited right to use this document. This document is solely intended for your personal, non-commercial use. Use of this document does not include any transfer of property rights, and it is subject to the following restrictions: All copies of this document must retain all copyright information and other information regarding legal protection. You are not allowed to alter this document in any way, to copy it for public or commercial purposes, to exhibit the document in public, or to perform, distribute, or otherwise use the document in public.
By using this particular document, you accept the above-stated conditions of
use.
Contact:
DIPF | Leibniz-Institut für Bildungsforschung und Bildungsinformation
Informationszentrum (IZ) Bildung
E-Mail: pedocs@dipf.de
Internet: www.pedocs.de

© American Psychological Association, 2014. This paper is not the copy of record and may not exactly replicate the authoritative document published in the APA journal. Please do not copy or cite without the authors' permission. The final article is available, upon publication, at: doi:10.1037/a0034716

The Time on Task Effect in Reading and Problem Solving Is Moderated by Task Difficulty and Skill: Insights From a Computer-Based Large-Scale Assessment
Frank Goldhammer (a,b), Johannes Naumann (a,b), Annette Stelter (a), Krisztina Tóth (a), Heiko Rölke (a), Eckhard Klieme (a,b)

(a) German Institute for International Educational Research (DIPF), Frankfurt/Main, Germany
(b) Centre for International Student Assessment (ZIB), Frankfurt/Main, Germany
Abstract
Computer-based assessment can provide new insights into behavioral processes of task completion
that cannot be uncovered by paper-based instruments. Time presents a major characteristic of the
task completion process. Psychologically, time on task has 2 different interpretations, suggesting
opposing associations with task outcome: Spending more time may be positively related to the
outcome as the task is completed more carefully. However, the relation may be negative if working
more fluently, and thus faster, reflects higher skill level. Using a dual processing theory framework,
the present study argues that the validity of each assumption is dependent on the relative degree of
controlled versus routine cognitive processing required by a task, as well as a person’s acquired skill.
A total of 1,020 persons ages 16 to 65 years participated in the German field test of the Programme
for the International Assessment of Adult Competencies. Test takers completed computer-based
reading and problem solving tasks. As revealed by linear mixed models, in problem solving, which
required controlled processing, the time on task effect was positive and increased with task difficulty.
In reading tasks, which required more routine processing, the time on task effect was negative and
the more negative, the easier a task was. In problem solving, the positive time on task effect
decreased with increasing skill level. In reading, the negative time on task effect increased with
increasing skill level. These heterogeneous effects suggest that time on task has no uniform
interpretation but is a function of task difficulty and individual skill.
Keywords: computer-based assessment, time on task, automatic and controlled processing, reading
literacy, problem solving
This research was supported by a grant of the Deutsche Forschungsgemeinschaft (DFG), awarded to
Frank Goldhammer, Johannes Naumann, and Heiko Rölke (GO 1979/1-1). We are grateful to Beatrice
Rammstedt and her group at GESIS (http://gesis.org), as well as the Federal Ministry for Education
and Research (BMBF) for making the data available for this study. Correspondence concerning this
article should be addressed to Frank Goldhammer, German Institute for International Educational
Research (DIPF), Rostocker Straße 6, 60323 Frankfurt/Main, Germany. E-mail: goldhammer@dipf.de

There are two fundamental observations on human performance: the result obtained on a task and
the time taken (e.g., Ebel, 1953). In educational assessment, the focus is mainly on the task outcome;
behavioral processes that led to the result are usually not considered. One reason may be that
traditional assessments are paper-based and, hence, are not suitable for collecting behavioral
process data at the task level (cf. Scheuermann & Björnsson, 2009). However, computer-based assessment, besides other advantages such as increased construct validity (e.g., Sireci & Zenisky, 2006) or improved test design (e.g., van der Linden, 2005), can provide further insights into the task completion process. This is because in computer-based assessment, the assessment system can record log file data that allow the researcher to derive theoretically meaningful descriptors of the task completion process. The present study draws on log file data from an international computer-based large-scale assessment to address the question of how time on task is related to the task outcome. As shown in the following, by analyzing the relation of task performance to the time test takers spent on task, we were able to obtain new insights into how the interaction of task and person characteristics determines the mode of cognitive processing. For instance, this can contribute to the validation of the assessment if time on task can be related to the task response in a theoretically sound way.
Time on task is an important characteristic of the solution process, indicating the duration of perceptual, cognitive, and psychomotor activities. From a measurement point of view, the usefulness of time on task and of the task outcome, respectively, depends on the tasks' difficulty. In easy
tasks assessing basic skills, individual differences will mainly occur in response latencies, whereas
accuracy will be consistently high. Following this logic, a number of assessment tools that address
constructs like naming speed (e.g., Nicolson & Fawcett, 1994), visual word recognition (e.g., Balota,
Cortese, Sergent-Marshall, Spieler, & Yap, 2004), or number naming speed (e.g., Krajewski &
Schneider, 2009) make use of time on task. In contrast, in more difficult tasks the accuracy of a result
is of interest, for example, in assessments of reading comprehension (e.g., van den Broek & Espin,
2012) or problem solving (e.g., Greiff, Wüstenberg, et al., 2013; Klieme, 2004; Mayer, 1994; Wirth &
Klieme, 2003). In these skill assessments, time on task usually is not taken into account.
Nevertheless, both the task result and time on task constitute task performance regardless of the
task’s difficulty.
In skill assessments, the relation between time on task and task result (accuracy) can be conceived of
in two ways. On the one hand, taking more time to work on a task may be positively related to the
result as the task is completed more thoroughly. On the other hand, the relation may be negative if
working faster and more fluently reflects a higher skill level. The present study addresses these
contradictory predictions and aims at clarifying the conditions of their validity by jointly analyzing
task success and time on task data from the computer-based Programme for the International
Assessment of Adult Competencies (PIAAC; cf. OECD, 2013; Schleicher, 2008). Thus, we take
advantage of the fact that computer-based assessment renders data (i.e., time on task) available on a large scale that were previously available only through small-scale experiments. Data such as
time spent on individual tasks can serve to answer basic research questions (such as clarifying the relation of time on task and task result in different domains). Furthermore, the data can enhance
educational assessment. For instance, construct validation can be supported by testing whether
behavioral process indicators are related to task outcomes as expected from theory.
Time on Task
Time on task is understood as the time from task onset to task completion. Thus, if the task was
completed in order, it reflects the time taken to become familiar with the task, to process the
materials provided to solve the task, to think about the solution, and to give a response.[1] In tasks
requiring the participant to interact with the stimulus through multiple steps, time on task can be
further split into components, for instance, reflecting the time taken to process a single page from a
multipage stimulus. To model time on task, two different approaches have been suggested (cf. van
der Linden, 2007, 2009). First, time is considered an indicator of a (latent) construct, for example,
reading speed (Carver, 1992) or reasoning speed (Goldhammer & Klein Entink, 2011). Here, response
and time data are modeled using separate measurement models. Second, within an explanatory item
response model, time is used as a predictor to explain differences in task success (cf. Roskam, 1997).
In the present study, this second approach is used to investigate the relation between time on task
and task success. Task success (dependent variable) can be perceived as a function of time on task
(independent variable) because the individual is able to control time spent on completing a task to
some extent, which in turn may affect the probability of attaining the correct result (cf. van der
Linden, 2009).
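
To make this second approach concrete, consider a minimal model sketch (our notation and a generic specification, not necessarily the exact model estimated in this study) in which task success is a logistic function of log-transformed time on task, with random effects for persons and tasks:

\operatorname{logit} \Pr(y_{pi} = 1) = \beta_0 + \beta_1 \log t_{pi} + \theta_p + b_i

Here, y_{pi} denotes person p's score on task i, t_{pi} the time on task, \theta_p a person effect (skill), and b_i a task effect (easiness). The coefficient \beta_1 is the time on task effect; moderation by task difficulty or skill corresponds to adding random slopes for \log t_{pi} across tasks and persons.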
Relation of Time on Task to Task Success
When investigating the relation between time on task and task success, the well-known speed-accuracy tradeoff, which is usually investigated in experimental research (cf. Luce, 1986), has to be taken into account. Tradeoff means that for a given person working on a particular task, accuracy will decrease as the person works faster. The positive relation between time on task and task success, as predicted by the speed-accuracy tradeoff, is a within-person phenomenon that can be expected for any task (e.g., Wickelgren, 1977). However, when switching from the within-person level to a population, the relation between time on task and task success might be completely different, for instance, a negative or no relation, although within each person, the speed-accuracy compromise remains as the positive relation between time on task and task success (cf. van der Linden, 2007).
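
This level distinction can be illustrated with a small simulation (a hypothetical toy example with made-up parameter values, not the study's data or analysis): every simulated person shows a positive within-person relation between time and accuracy, yet across persons the relation is negative because higher skill makes a person both faster and more accurate.

# Toy simulation: positive within-person, negative between-person
# time-accuracy relation (all parameter values are hypothetical).
import numpy as np

rng = np.random.default_rng(0)
n_persons, n_tasks = 200, 50

skill = rng.normal(0.0, 1.0, n_persons)   # latent skill per person
mean_log_time = 3.0 - 0.5 * skill         # skilled persons work faster

within_corrs, person_time, person_acc = [], [], []
for p in range(n_persons):
    log_t = rng.normal(mean_log_time[p], 0.3, n_tasks)
    # Within person: spending more time raises the success probability.
    logit = 0.5 * skill[p] + 1.5 * (log_t - mean_log_time[p])
    correct = (rng.random(n_tasks) < 1 / (1 + np.exp(-logit))).astype(float)
    within_corrs.append(np.corrcoef(log_t, correct)[0, 1])
    person_time.append(log_t.mean())
    person_acc.append(correct.mean())

# nanmean skips persons whose outcomes happen to be constant (corrcoef -> nan)
print("within-person r:", round(float(np.nanmean(within_corrs)), 2))       # > 0
print("between-person r:",
      round(float(np.corrcoef(person_time, person_acc)[0, 1]), 2))          # < 0

The two printed correlations have opposite signs, mirroring the argument that within-person tradeoffs and population-level associations need not agree.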
Consequently, at the population level, findings on the relation of time on task with task success may be heterogeneous. One line of research modeling time on task as an indicator of speed provides speed-skill or speed-ability correlations of different directions and strengths across domains. For example, for reasoning, positive correlations between skill (measured through task success) and slowness (measured through time on task) were found (e.g., Goldhammer & Klein Entink, 2011; Klein Entink et al., 2009).

[1] Depending on what is considered to be a task, there may be alternative definitions of time on task. For instance, in this special section, Kupiainen, Vainikainen, Marjanen, and Hautamäki (2014) use the term time on task to refer to the time needed to complete a test in a learning-to-learn assessment, whereas response time is considered to represent the time needed to respond to a single question or problem (which is comparable to our notion of time on task).

References

Baayen, R. H., Davidson, D. J., & Bates, D. M. (2008). Mixed-effects modeling with crossed random effects for subjects and items. Journal of Memory and Language, 59(4), 390-412.

Bolker, B. M., Brooks, M. E., Clark, C. J., Geange, S. W., Poulsen, J. R., Stevens, M. H. H., & White, J.-S. S. (2009). Generalized linear mixed models: A practical guide for ecology and evolution. Trends in Ecology & Evolution, 24(3), 127-135.

Kintsch, W. (1998). Comprehension: A paradigm for cognition. Cambridge, England: Cambridge University Press.

Schneider, W., & Shiffrin, R. M. (1977). Controlled and automatic human information processing: I. Detection, search, and attention. Psychological Review, 84(1), 1-66.

Shiffrin, R. M., & Schneider, W. (1977). Controlled and automatic human information processing: II. Perceptual learning, automatic attending, and a general theory. Psychological Review, 84(2), 127-190.
Frequently Asked Questions (14)
Q1. What contributions have the authors mentioned in the paper "The time on task effect in reading and problem solving is moderated by task difficulty and skill: Insights from a computer-based large-scale assessment"?

Using a dual processing theory framework, the present study argues that the validity of each assumption is dependent on the relative degree of controlled versus routine cognitive processing required by a task, as well as a person's acquired skill. These heterogeneous effects suggest that time on task has no uniform interpretation but is a function of task difficulty and individual skill.

A negative time on task effect was expected for reading tasks because, in reading tasks, a number of component cognitive processes are apt for automatization. 

As reading tasks become more difficult and readers need to engage in strategic and thus controlled cognitive processing, the negative time on task effect will be diminished or reversed. 

For combinations of two tasks (easy vs. hard) with two persons (less able vs. able), the probability of obtaining a correct response is plotted as a function of time on task. 

By moving from the global process measure of time on task to the underlying constituents, the authors can further validate the interpretation of the time on task effect. 

For comparing nested models, the likelihood ratio (LR) test was used, which is appropriate for inference on random effects (Bolker et al., 2009).
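
For reference, the LR statistic follows its standard definition (a general formula, not specific to this paper):

LR = -2 (\ell_{\text{nested}} - \ell_{\text{full}}) \sim \chi^2_{df}

where \ell denotes a maximized log-likelihood and df is the difference in the number of parameters; when a variance component is tested at the boundary of its parameter space, this reference distribution is conservative (cf. Bolker et al., 2009).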

Schneider and Fisk (1983) described skilled behavior in problem solving and strategy planning as a function of controlled processing.

An explanation for the heterogeneity of associations between time on task and task success may be provided by dual processing theory, which distinguishes between automatic and controlled mental processes (cf. Fitts & Posner, 1967; Schneider & Chein, 2003; Schneider & Shiffrin, 1977). 

In line with this reasoning, Klein Entink et al. (2009) showed that test effort in a reasoning test, that is, the extent to which a test taker cares about the result, is positively related to test-taking slowness (measured through time on task), which itself is positively related to skill (measured through task success).

On the one hand, the authors predict that the time on task effect gets more positive for less skilled problem solvers and less negative for less skilled readers since they are expected to accomplish tasks with higher demands of controlled and strategic processing than skilled persons. 

In measurement models of item response theory, for instance, the effect of each item or task i on the probability of obtaining a correct response is typically estimated as a fixed effect representing the task's difficulty or easiness.
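
In Rasch-type notation, such a measurement model can be sketched as follows (a generic formulation, not necessarily this study's exact specification):

\operatorname{logit} \Pr(y_{pi} = 1) = \theta_p + \beta_i

where \theta_p is person p's skill and the fixed effect \beta_i is the easiness of task i (its negative giving the difficulty).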

As a measure of effect size, the authors computed Nagelkerke's R², which was .25; that is, about a quarter of the response variability could be explained by the component time predictors.
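
For reference, Nagelkerke's R² rescales the Cox-Snell coefficient so that its maximum is 1 (standard definition, not specific to this paper):

R^2_{N} = \frac{1 - (L_0 / L_M)^{2/n}}{1 - L_0^{2/n}}

where L_0 and L_M are the likelihoods of the intercept-only and the fitted model, and n is the number of observations.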

Two of five Web pages in this task meet the criteria specified in the instruction and have to be bookmarked to obtain a correct response. 

A comparison of tasks across studies reveals that in difficult tasks assessing, for instance, reasoning, task success is positively related to time on task, whereas in easy tasks, such as basic interactions with a computer interface, the relation is negative.