
Showing papers by "Frank Goldhammer published in 2019"


01 Jan 2019
TL;DR: Münster; New York: Waxmann 2019, 408 pp., as discussed by the authors.
Abstract: Münster; New York: Waxmann 2019, 408 pp. Educational subdiscipline: empirical educational research; media education.

51 citations


Journal ArticleDOI
TL;DR: A significant negative average effect was found for the delay indicator, indicating that early planning in CPS is more beneficial; however, task-dependent effects and interaction effects were found for all three indicators, suggesting that the effects of different planning behaviors on CPS are highly intertwined.
Abstract: Complex problem solving (CPS) is a highly transversal competence needed in educational and vocational settings as well as in everyday life. The assessment of CPS is often computer-based and therefore provides data regarding not only the outcome but also the process of CPS. However, research addressing this issue is scarce. In this article we investigated planning activities in the process of complex problem solving. We operationalized planning through three behavioral measures: the duration of the longest planning interval, the delay of the longest planning interval, and the variance of the intervals between successive interactions. We found a significant negative average effect for our delay indicator, indicating that early planning in CPS is more beneficial. However, we also found task-dependent effects and interaction effects for all three indicators, suggesting that the effects of different planning behaviors on CPS are highly intertwined.
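For illustration only, a minimal Python sketch of how three such planning indicators could be computed from a chronologically ordered list of interaction timestamps; the simplifying assumption that planning intervals are the gaps between successive interactions, and all function and field names, are ours, not the authors'.

```python
from typing import List, Optional, Dict
from statistics import pvariance

def planning_indicators(timestamps: List[float]) -> Dict[str, Optional[float]]:
    """Illustrative computation of three planning indicators from a
    chronologically ordered list of interaction timestamps (seconds).
    Simplifying assumption: 'planning intervals' are the gaps between
    successive interactions within one CPS task."""
    gaps = [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]
    if not gaps:
        return {"longest_interval": None, "delay_of_longest": None, "gap_variance": None}
    longest = max(gaps)
    # Delay: onset of the longest planning interval relative to task start.
    delay = timestamps[gaps.index(longest)] - timestamps[0]
    return {
        "longest_interval": longest,      # duration of the longest planning interval
        "delay_of_longest": delay,        # how late in the task it occurred
        "gap_variance": pvariance(gaps),  # variance of inter-interaction intervals
    }

# Example: interactions logged at 0, 4, 5, 20, and 22 seconds after task start.
print(planning_indicators([0.0, 4.0, 5.0, 20.0, 22.0]))
```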

42 citations


Journal ArticleDOI
TL;DR: The study demonstrates the theory-informed construction of process variables from log files and an approach for the empirical validation of their interpretation; it suggests that students apply sourcing for different reasons, but also stresses the need for further validation studies and for refinements in the operationalization of the indicators investigated.
Abstract: Background: With digital technologies, competence assessments can provide process data, such as mouse clicks with corresponding timestamps, as additional information about the skills and strategies of test takers. However, in order to use variables generated from process data sensibly for educational purposes, their interpretation needs to be validated with regard to their intended meaning. Aims: This study seeks to demonstrate how process data from an assessment of multiple document comprehension can be used to represent sourcing, which summarizes activities involving the consideration of the origin and intention of documents. The investigated process variables were created according to theoretical assumptions about sourcing and systematically tested for differences between persons, units (i.e., documents and items), and properties of the test administration. Sample: The sample included 310 German university students (79.4% female) enrolled in several bachelor's or master's programmes in the social sciences and humanities. Methods: To account for the hierarchical data structure, the hypotheses were analysed with generalized linear mixed models (GLMMs). Results: The results mostly revealed the expected differences between individuals and units. However, unexpected effects of the administered order of units and documents were detected. Conclusions: The study demonstrates the theory-informed construction of process variables from log files and an approach for the empirical validation of their interpretation. The results suggest that students apply sourcing for different reasons, but also stress the need for further validation studies and for refinements in the operationalization of the indicators investigated.
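As a hedged illustration (not the study's actual coding scheme), the sketch below shows how a binary sourcing indicator per person and item could be derived from a timestamped log-event table; all event names and column names are hypothetical.

```python
import pandas as pd

# Hypothetical log-event table: one row per logged event with person, item,
# event type, and timestamp. Event names are illustrative, not the study's codes.
log = pd.DataFrame({
    "person":  ["p1", "p1", "p1", "p2", "p2"],
    "item":    ["i1", "i1", "i2", "i1", "i2"],
    "event":   ["open_document", "view_source_info", "open_document",
                "open_document", "view_source_info"],
    "time_ms": [1000, 4500, 9000, 800, 7000],
})

# Binary process variable: did the person inspect source information
# (e.g., author, venue, date) at least once while working on the item?
sourcing = (
    log.assign(is_sourcing=log["event"].eq("view_source_info"))
       .groupby(["person", "item"], as_index=False)["is_sourcing"]
       .max()
       .rename(columns={"is_sourcing": "sourcing"})
)
print(sourcing)
# Person-by-item indicators of this kind can then be analysed with a GLMM,
# e.g., a logistic model with crossed random effects for persons and items.
```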

21 citations


Journal ArticleDOI
TL;DR: The study shows the suitability of the proposed validation approach, which uses processing times to collect validity evidence for the construct interpretation of test scores; using time information in association with task results brings construct validation closer to the actual response process than the widely used correlations of test scores.
Abstract: A validity approach is proposed that uses processing times to collect validity evidence for the construct interpretation of test scores. The rationale of the approach is based on current research on processing times and on classical validity approaches that provide validity evidence based on relationships with other variables. Within the new approach, convergent validity evidence is obtained if a component skill that is expected to underlie the task solution process in the target construct positively moderates the relationship between effective speed and effective ability in the corresponding target construct. Discriminant validity evidence is provided if a component skill that is not expected to underlie the task solution process in the target construct does not moderate the speed-ability relation in this target construct. Using data from a study that follows up the German PIAAC sample, this approach was applied to reading competence, assessed with PIAAC literacy items, and to quantitative reasoning, assessed with Number Series. As expected from theory, the effect of speed on ability in the target construct was moderated only by the respective underlying component skill, that is, word-meaning activation skill as an underlying component skill of reading competence and perceptual speed as an underlying component skill of reasoning. Accordingly, no positive interactions were found for the component skills that should not underlie the task solution process, that is, word-meaning activation for reasoning and perceptual speed for reading. Furthermore, the study shows the suitability of the proposed validation approach. The use of time information in association with task results brings construct validation closer to the actual response process than the widely used correlations of test scores.
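The moderation logic behind this approach can be illustrated with a deliberately simplified sketch at the level of observed person scores (the paper itself works with latent speed and ability); the data are simulated and all variable names are ours.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulate person-level scores: a component skill (e.g., word-meaning activation),
# effective speed, and effective ability in the target construct, where the
# component skill positively moderates the speed-ability relation.
rng = np.random.default_rng(1)
n = 500
component = rng.normal(size=n)
speed = rng.normal(size=n)
ability = (0.3 * speed + 0.2 * component
           + 0.25 * speed * component            # the moderation of interest
           + rng.normal(scale=0.8, size=n))

df = pd.DataFrame({"ability": ability, "speed": speed, "component": component})

# Convergent evidence would correspond to a positive speed-by-component
# interaction; discriminant evidence to a near-zero interaction for a
# component skill that should not underlie the task solution process.
model = smf.ols("ability ~ speed * component", data=df).fit()
print(model.params[["speed", "component", "speed:component"]])
```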

16 citations


Journal ArticleDOI
TL;DR: The mode effect on ability was identified as the latent difference between reading as measured in CBA and PBA; similar to the gender effect, the mode effect in ability was observed together with a difference in latent speed between modes.
Abstract: In this paper, we developed a method to extract item-level response times from log data that are available in computer-based assessments (CBA) and in paper-based assessments (PBA) administered with digital pens. Based on response times that were extracted using only the time differences between responses, we used the bivariate generalized linear IRT model framework (B-GLIRT; Molenaar, Tuerlinckx, & van der Maas, 2015) to investigate response times as indicators of response processes. A parameterization that includes an interaction between the latent speed factor and the latent ability factor in the cross-relation function was found to fit the data best in both CBA and PBA. Data were collected with a within-subject design in a national add-on study to PISA 2012 administering two clusters of PISA 2009 reading units. After investigating the invariance of the measurement models for ability and speed between boys and girls, we found the expected gender effect in reading ability to coincide with a gender effect in speed in CBA. Taking this result as an indication of the validity of the time measures extracted from time differences between responses, we analyzed the PBA data and found the same gender effects for ability and speed. Analyzing the PBA and CBA data together, we identified the mode effect on ability as the latent difference between reading as measured in CBA and PBA. Similar to the gender effect, the mode effect in ability was observed together with a difference in latent speed between modes. However, while the relationship between speed and ability is identical for boys and girls, we found hints of mode differences in the estimated parameters of the cross-relation function used in the B-GLIRT model.
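A minimal sketch, under our own assumptions about the log format, of the basic extraction step described above: item-level response times taken as time differences between successive responses within a person's session.

```python
import pandas as pd

# Hypothetical response log: one row per recorded response, with the timestamp
# at which the response was given. Column names are assumptions, not the
# original log format.
responses = pd.DataFrame({
    "person":  ["p1", "p1", "p1"],
    "item":    ["i1", "i2", "i3"],
    "time_ms": [15000, 42000, 61000],
})

responses = responses.sort_values(["person", "time_ms"])
# Response time for item k = timestamp of response k minus timestamp of
# response k-1. The first response of a session has no predecessor and
# therefore stays missing in this simplified sketch.
responses["rt_ms"] = responses.groupby("person")["time_ms"].diff()
print(responses)
```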

12 citations


Journal ArticleDOI
TL;DR: The investigation sought to determine whether mode effects can be explained by item properties and showed that splitting texts across multiple screens did not affect comparability, but item difficulty increased in CBA when items in the first and second positions of a unit were not presented on the same double-page as in PBA.
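Purely as an illustration of how such a question could be examined (not the authors' analysis), one might regress per-item mode effects, i.e., differences between CBA and PBA difficulty estimates, on binary item properties; all data and variable names below are hypothetical.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical item-level data: difficulty estimated separately per mode and
# two binary item properties. Values and column names are illustrative only.
items = pd.DataFrame({
    "diff_cba":       [0.5, -0.2, 1.1, 0.0, 0.8, -0.5],
    "diff_pba":       [0.4, -0.3, 0.6, 0.1, 0.3, -0.4],
    "text_split":     [0, 0, 1, 0, 1, 0],  # text split across multiple screens in CBA
    "off_doublepage": [0, 0, 1, 0, 1, 1],  # item not on the same double-page as in PBA
})
items["mode_effect"] = items["diff_cba"] - items["diff_pba"]

# Do the item properties explain the per-item mode effect?
print(smf.ols("mode_effect ~ text_split + off_doublepage", data=items).fit().params)
```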

12 citations


Journal ArticleDOI
TL;DR: In this article, the cognitive load of students working on tasks that require the comprehension of multiple documents (MDC) was investigated in a sample of 310 students.
Abstract: The study investigates the cognitive load of students working on tasks that require the comprehension of multiple documents (Multiple Document Comprehension, MDC). In a sample of 310 stud...

11 citations



Journal ArticleDOI
TL;DR: In 2015, the Programme for International Student Assessment (PISA) introduced multiple changes in its study design, the most extensive being the transition from paper- to computer-based assessment, as mentioned in this paper.
Abstract: In 2015, the Programme for International Student Assessment (PISA) introduced multiple changes in its study design, the most extensive being the transition from paper- to computer-based ass...

6 citations


Book ChapterDOI
01 Jan 2019
TL;DR: This chapter presents a strategy that allows the integration of results from UOA into the results from proctored computerized assessments and generalizes the idea of motivational filtering, known for the treatment of rapid guessing behavior in low-stakes assessment.
Abstract: Many large-scale competence assessments, such as the National Educational Panel Study (NEPS), have introduced novel test designs to improve response rates and measurement precision. In particular, unstandardized online assessments (UOA) offer an economical approach to reaching heterogeneous populations that otherwise would not participate in face-to-face assessments. Acknowledging the difference between delivery, mode, and test setting, this chapter extends the theoretical background for dealing with mode effects in NEPS competence assessments (Kroehne and Martens, Zeitschrift für Erziehungswissenschaft 14:169–186, 2011) and discusses two specific facets of UOA: (a) the confounding of selection and setting effects and (b) the role of test-taking behavior as a mediator variable. We present a strategy that allows the integration of results from UOA into the results from proctored computerized assessments and generalizes the idea of motivational filtering, known from the treatment of rapid guessing behavior in low-stakes assessments. We particularly emphasize the relationship between paradata and the investigation of test-taking behavior, and we illustrate how a reference sample formed by competence assessments under standardized and supervised conditions can be used to increase the comparability of UOA in mixed-mode designs. The closing discussion reflects on the trade-off between data quality and the benefits of UOA.
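A minimal sketch of the general idea of motivational filtering, assuming a simple fixed response-time threshold; the threshold value, column names, and scoring rule are illustrative assumptions, not the chapter's procedure.

```python
import pandas as pd

# Illustrative threshold: responses faster than this are flagged as rapid guesses.
RAPID_GUESS_THRESHOLD_S = 3.0

responses = pd.DataFrame({
    "person":  ["p1", "p1", "p2", "p2"],
    "item":    ["i1", "i2", "i1", "i2"],
    "rt_s":    [12.4, 1.8, 25.0, 9.7],
    "correct": [1, 0, 1, 1],
})

# Flag rapid guesses and treat them as not administered (missing) before scoring,
# rather than counting them as wrong answers.
responses["rapid_guess"] = responses["rt_s"] < RAPID_GUESS_THRESHOLD_S
responses["score"] = responses["correct"].where(~responses["rapid_guess"])
print(responses)
```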

5 citations


DOI
01 Jan 2019
TL;DR: Eickelmann, Birgit; Bos, Wilfried; Gerick, Julia; Goldhammer, Frank; Schaumburg, Heike; Schwippert, Knut; Senkbeil, Martin; Vahrenhold, Jan (Eds.), as discussed by the authors.
Abstract: Eickelmann, Birgit (Ed.); Bos, Wilfried (Ed.); Gerick, Julia (Ed.); Goldhammer, Frank (Ed.); Schaumburg, Heike (Ed.); Schwippert, Knut (Ed.); Senkbeil, Martin (Ed.); Vahrenhold, Jan (Ed.): ICILS 2018 #Deutschland. Computer- und informationsbezogene Kompetenzen von Schülerinnen und Schülern im zweiten internationalen Vergleich und Kompetenzen im Bereich Computational Thinking. Münster; New York: Waxmann 2019, pp. 33-77. Educational subdiscipline: empirical educational research; media education.

Book
01 Jan 2019
TL;DR: With the IEA study ICILS 2018 (International Computer and Information Literacy Study), the computer and information literacy of eighth graders is measured in international comparison for the second time after 2013, as discussed by the authors.
Abstract: eleed, Iss. 13 - With the IEA study ICILS 2018 (International Computer and Information Literacy Study), the computer and information literacy of eighth graders is measured in international comparison for the second time after 2013. The study thus provides a current picture of the state of digital education in Germany. In this volume, empirically based findings on developments over a multi-year period are presented for the first time. Also for the first time, an international add-on module provides findings on the competence domain of 'Computational Thinking', which addresses problem solving and the competent handling of algorithmic structures. The detailed assessment of framework conditions at the school, teacher, and student level, as well as from the perspective of school principals and IT coordinators, yields an overall picture of technological and pedagogical developments in Germany in international comparison. Results on central background characteristics of the students and on differences between school types complement these findings. The volume thus provides information for the (further) development of schools and teaching as well as for teacher professionalization in the context of digital transformation processes.