
Showing papers by "Bas Giesbers published in 2015"


Journal ArticleDOI
TL;DR: This empirical contribution provides an application of Buckingham Shum and Deakin Crick's theoretical framework of dispositional learning analytics: an infrastructure that combines learning dispositions data with data extracted from computer-assisted, formative assessments and LMSs.

352 citations


Journal ArticleDOI
TL;DR: In this article, an empirical study based on a large sample of university students was conducted to demonstrate that relaxing these stringent assumptions, and thereby using the meaning system framework to its full potential, will provide strong benefits: effort beliefs are crucial mediators of relationships between implicit theories and achievement goals and academic motivations.
Abstract: Empirical studies into meaning systems surrounding implicit theories of intelligence typically entail two stringent assumptions: that different implicit theories and different effort beliefs represent opposite poles on a single scale, and that implicit theories directly impact constructs such as achievement goals and academic motivations. Through an empirical study based on a large sample of university students, we aim to demonstrate that relaxing these stringent assumptions, and thereby using the meaning system framework to its full potential, provides strong benefits: effort beliefs are crucial mediators of relationships between implicit theories and achievement goals and academic motivations, and the different poles of implicit theories and effort beliefs expose different relationships with goal-setting behaviour and academic motivations. A structural equation model, cross-validated by demonstrating gender invariance of path coefficients, demonstrates that incremental and entity theory views have less predictive power than positive and negative effort beliefs in explaining achievement goals and motivations.
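To make the mediation claim concrete, here is a minimal sketch of the underlying reasoning on synthetic data: compare the total effect of implicit theories on achievement goals with the direct effect once effort beliefs are controlled for. The variable names (implicit_theory, effort_belief, achievement_goal) are hypothetical placeholders, and this two-regression check is only an illustration of mediation, not the cross-validated structural equation model reported in the paper.

```python
# Illustration only: a simple two-step mediation check on synthetic data,
# not the authors' structural equation model. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
implicit_theory = rng.normal(size=n)                         # e.g. incremental vs. entity score
effort_belief = 0.6 * implicit_theory + rng.normal(size=n)   # positive effort beliefs
achievement_goal = 0.1 * implicit_theory + 0.7 * effort_belief + rng.normal(size=n)
df = pd.DataFrame({"implicit_theory": implicit_theory,
                   "effort_belief": effort_belief,
                   "achievement_goal": achievement_goal})

# Total effect of implicit theory on achievement goals
total = smf.ols("achievement_goal ~ implicit_theory", data=df).fit()
# Direct effect after controlling for the mediator (effort beliefs)
direct = smf.ols("achievement_goal ~ implicit_theory + effort_belief", data=df).fit()

print("total effect:  ", round(total.params["implicit_theory"], 3))
print("direct effect: ", round(direct.params["implicit_theory"], 3))
print("mediated share:",
      round(1 - direct.params["implicit_theory"] / total.params["implicit_theory"], 3))
```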

73 citations


Proceedings ArticleDOI
23 May 2015
TL;DR: Focusing on predictive power, the study provides evidence of both stability and sensitivity of regression-type prediction models, applying Buckingham Shum and Deakin Crick’s theoretical framework of dispositional learning analytics.
Abstract: Learning analytics seek to enhance learning processes through systematic measurement of learning-related data and to provide informative feedback to learners and educators. In this follow-up study of previous research (Tempelaar, Rienties, and Giesbers, 2015), we focus on the issues of stability and sensitivity of Learning Analytics (LA) based prediction models. Do prediction models stay intact when the instructional context is repeated in a new cohort of students, and do prediction models indeed change when relevant aspects of the instructional context are adapted? This empirical contribution provides an application of Buckingham Shum and Deakin Crick’s theoretical framework of dispositional learning analytics: an infrastructure that combines learning dispositions data with data extracted from computer-assisted, formative assessments and LMSs. We compare two cohorts of a large introductory quantitative methods module, with 1005 students in the ’13/’14 cohort and 1006 students in the ’14/’15 cohort. Both modules were based on principles of blended learning, combining face-to-face Problem-Based Learning sessions with e-tutorials, and have similar instructional designs, except for an intervention into the design of quizzes administered in the module. Focusing on predictive power, we provide evidence of both stability and sensitivity of regression-type prediction models.
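The stability and sensitivity questions lend themselves to a short illustration: fit a regression-type prediction model on one cohort, score the next cohort with it (stability), and compare the re-estimated coefficient of the variable touched by the quiz intervention (sensitivity). The sketch below uses synthetic data and hypothetical predictor names (quiz_mastery, time_on_task, effort_belief); it is not the study's actual model or data.

```python
# Hedged sketch: cross-cohort stability and sensitivity checks on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def make_cohort(n, quiz_effect, seed):
    rng = np.random.default_rng(seed)
    quiz_mastery = rng.uniform(0, 1, n)
    time_on_task = rng.gamma(2.0, 2.0, n)
    effort_belief = rng.normal(size=n)
    exam = (40 + quiz_effect * quiz_mastery + 0.5 * time_on_task
            + 3 * effort_belief + rng.normal(0, 8, n))
    return pd.DataFrame({"exam": exam, "quiz_mastery": quiz_mastery,
                         "time_on_task": time_on_task, "effort_belief": effort_belief})

cohort_1314 = make_cohort(1005, quiz_effect=30, seed=1)
cohort_1415 = make_cohort(1006, quiz_effect=35, seed=2)  # quiz redesign shifts this effect

formula = "exam ~ quiz_mastery + time_on_task + effort_belief"
model_1314 = smf.ols(formula, data=cohort_1314).fit()
model_1415 = smf.ols(formula, data=cohort_1415).fit()

# Stability: do coefficients estimated on '13/'14 still predict '14/'15 performance?
predicted = model_1314.predict(cohort_1415)
r = np.corrcoef(predicted, cohort_1415["exam"])[0, 1]
print("out-of-cohort R^2:", round(r ** 2, 3))

# Sensitivity: does the coefficient touched by the intervention shift between cohorts?
print("quiz_mastery coefficient:",
      round(model_1314.params["quiz_mastery"], 2), "vs",
      round(model_1415.params["quiz_mastery"], 2))
```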

12 citations


Book ChapterDOI
23 May 2015
TL;DR: This empirical contribution compares two cohorts of a large module introducing mathematics and statistics, and analyses bivariate and multivariate relationships of module performance and track and disposition data to provide evidence of both stability and sensitivity of prediction models.
Abstract: In this empirical contribution, a follow-up study of previous research [1], we focus on the issues of stability and sensitivity of Learning Analytics based prediction models. Do prediction models stay intact when the instructional context is repeated in a new cohort of students, and do prediction models indeed change when relevant aspects of the instructional context are adapted? Applying Buckingham Shum and Deakin Crick’s theoretical framework of dispositional learning analytics combined with formative assessments and learning management systems, we compare two cohorts of a large module introducing mathematics and statistics. Both modules were based on principles of blended learning, combining face-to-face Problem-Based Learning sessions with e-tutorials, and have similar instructional designs, except for an intervention into the design of quizzes administered in the module. We analyse bivariate and multivariate relationships of module performance and track and disposition data to provide evidence of both stability and sensitivity of prediction models.
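As an illustration of the bivariate versus multivariate comparison, the sketch below computes, per synthetic cohort, zero-order correlations of module performance with each predictor alongside the partial coefficients from one joint regression. Column names and data are hypothetical stand-ins for the performance, track and disposition measures used in the study.

```python
# Sketch of bivariate vs. multivariate relationships, per cohort, on synthetic data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def synthetic_cohort(n, seed):
    rng = np.random.default_rng(seed)
    quiz_mastery = rng.uniform(0, 1, n)
    time_on_task = 5 * quiz_mastery + rng.gamma(2.0, 1.0, n)  # track data, correlated with mastery
    effort_belief = rng.normal(size=n)                        # disposition data
    exam = 40 + 30 * quiz_mastery + 3 * effort_belief + rng.normal(0, 8, n)
    return pd.DataFrame({"exam": exam, "quiz_mastery": quiz_mastery,
                         "time_on_task": time_on_task, "effort_belief": effort_belief})

predictors = ["quiz_mastery", "time_on_task", "effort_belief"]
for name, cohort in [("'13/'14", synthetic_cohort(1005, 1)),
                     ("'14/'15", synthetic_cohort(1006, 2))]:
    # Bivariate: zero-order correlations with module performance.
    bivariate = cohort[predictors + ["exam"]].corr()["exam"][predictors]
    # Multivariate: partial contributions in one joint regression model.
    multivariate = smf.ols("exam ~ " + " + ".join(predictors), data=cohort).fit().params[predictors]
    print(name)
    print("  bivariate r:   ", bivariate.round(2).to_dict())
    print("  multivariate b:", multivariate.round(2).to_dict())
```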

5 citations


Book ChapterDOI
TL;DR: Time on task data appear to be more sensitive to the effects of heterogeneity than mastery data, providing a further argument to prioritize formative assessment mastery data as predictor variables in the design of prediction models directed at the generation of learning feedback.
Abstract: Mastery data derived from formative assessments constitute a rich data set in the development of student performance prediction models. The dominance of formative assessment mastery data over use intensity data, such as time on task or number of clicks, was the outcome of previous research by the authors in a dispositional learning analytics context [1, 2, 3]. The practical implications of these findings are far reaching, contradicting current practices of developing (learning analytics based) student performance prediction models with intensity data as central predictor variables. In this empirical follow-up study using data from 2011 students, we search for an explanation for time on task data being dominated by mastery data. We do so by investigating more general models, allowing for nonlinear, even non-monotonic, relationships between time on task and performance measures. Clustering students into subsamples with different time on task characteristics suggests that heterogeneity of the sample is an important cause of the nonlinear relationships with performance measures. Time on task data appear to be more sensitive to the effects of heterogeneity than mastery data, providing a further argument to prioritize formative assessment mastery data as predictor variables in the design of prediction models directed at the generation of learning feedback.
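The heterogeneity argument can be sketched as follows: pool two hypothetical subpopulations with different time-on-task profiles, observe that the overall time-performance relation becomes distorted, then cluster students on time on task (here with k-means) and observe cleaner within-cluster relations. The data and cluster structure below are synthetic and illustrative only, not the study's.

```python
# Sketch: sample heterogeneity can distort the pooled time-on-task / performance relation.
import numpy as np
import pandas as pd
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
n = 1000
# Two hypothetical subpopulations: efficient students (little time, high scores)
# and struggling students (much time, lower scores).
efficient = pd.DataFrame({"time_on_task": rng.gamma(2.0, 1.0, n // 2),
                          "exam": rng.normal(75, 8, n // 2)})
struggling = pd.DataFrame({"time_on_task": rng.gamma(6.0, 1.5, n // 2),
                           "exam": rng.normal(55, 8, n // 2)})
students = pd.concat([efficient, struggling], ignore_index=True)
# Within each subpopulation, extra time on task still helps a little.
students["exam"] += 1.5 * (students["time_on_task"] - students["time_on_task"].mean())

# Pooled relation: dominated by between-group differences, so weak or even reversed.
print("pooled r:", round(students["time_on_task"].corr(students["exam"]), 2))

# Cluster on time-on-task characteristics, then look within clusters.
students["cluster"] = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
    students[["time_on_task"]])
for c, grp in students.groupby("cluster"):
    print(f"cluster {c}: n={len(grp)}, within-cluster r =",
          round(grp["time_on_task"].corr(grp["exam"]), 2))
```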

4 citations