Proceedings ArticleDOI

Personalized Remedial Recommendations for SQL Programming Practice System

TL;DR: This paper presents a transparent and explainable interface for remedial recommendations in an online programming practice system implemented to support SQL programming practice and evaluated in the context of a large database course.
Abstract: Personalized recommendation of learning content is one of the most frequently cited benefits of personalized online learning. It is expected that with personalized content recommendation students will be able to build their own unique and optimal learning paths and to achieve course goals in the most optimal way. However, in many practical cases students search for learning content not to expand their knowledge, but to address problems encountered in the learning process, such as failures to solve a problem. In these cases, students could be better assisted by remedial recommendations focused on content that could help in resolving current problems. This paper presents a transparent and explainable interface for remedial recommendations in an online programming practice system. The interface was implemented to support SQL programming practice and evaluated in the context of a large database course. The paper summarizes the insights obtained from the study and discusses future work on remedial recommendations.

Summary (4 min read)

1 INTRODUCTION

  • Over the last decade, the issues of transparency and control in recommender systems have emerged as an important stream of research.
  • Research has shown that explanations can increase persuasiveness of the recommended items as well as users’ trust and satisfaction with the recommender system [17].
  • Here, the authors explore a relatively new class of remedial recommendations focused on helping to address problems encountered during the learning process.
  • The remaining part of the paper reviews the results of their study of this technology in a target educational context.

3.1 Interface

  • The core of the SQL Programming Practice System is the Mastery Grids interface which offers open learner modeling (OLM) [5] and provides access to several types of learning content [12].
  • The version of the Mastery Grids interface for SQL is presented in Figure 1-A. Mastery Grids uses a topic-level approach to OLM, where the course content is grouped into a set of topics (see Fig. 1-A).
  • The higher the opacity, the higher the progress of the learner in that topic.
  • In addition to the topic-level progress visualization, Mastery Grids shows the progress level for each type of content for each topic.
  • In addition, the interface recommends the most appropriate learning activities to each learner in two ways: (1) highlighting recommended activities with stars on the grid of activities and (2) offering them as a ranked list to the left of the grid.

3.2 Learning Content

  • Mastery Grids provided access to three types of interactive practice content for learning SQL programming: annotated examples (labeled as Examples in Fig. 1-A), animated examples, and query problems.
  • Query animations visualize the execution of a query.
  • The aim of these examples is to demonstrate visually how the various query clauses are executed (step by step) to help students understand the semantics of the query.
  • These problems are served by the SQL-KnoT (Knowledge Tester) system [3].
  • Every time a student accesses a problem, the actual problem is randomly selected from a problem set associated with the template.

3.3 Student Modeling and Knowledge Level Visualization

  • Mastery Grids can depict the concept-level knowledge estimation as a bar chart (see Fig. 1-C&D).
  • Each bar represents the actual student’s knowledge level estimated by the system for a specific concept.
  • CUMULATE combines evidence generated from problem-solving attempts using an asymptotic function.
  • The concepts associated with each content item are highlighted in the same way.
  • The details of the recommendation approach and student modeling are explained in the next section.

3.4 Educational Recommender System

  • The authors used a remedial recommendation approach, which focuses on suggesting learning activities that cover some of the concepts where students have exhibited some level of struggle [1, 2].
  • To generate remedial recommendations, the authors used the concept-level knowledge estimated by the CUMULATE student model and the concept-level success rate to calculate a difficulty score for each learning activity.
  • The details of the recommendation process can be found in [2].
  • The authors' goal here is to examine the different effects that different explanation formats can have on students working with recommended content in an online learning environment.
  • It is important to mention that for this group, the visual OLM highlights the associated concepts for any moused-over activity, not only for recommended ones (i.e., when mousing over non-recommended activities, students can still see their knowledge state on the associated concepts).

4 STUDY DESCRIPTION

  • The authors conducted a classroom study in the Spring 2019 term at Aalto University, a major research university in Finland, from February to May.
  • Mastery Grids was offered as a practice system to students who were enrolled in an undergraduate database management course named "CS-A1150 - Databases".
  • The course covers topics such as relational modeling, relational algebra, UML modeling, SQL, and transaction management.
  • Not all students used the system, because it was a voluntary additional learning resource.
  • At the end of the semester, the post-test was administered.

5 RESULTS

  • At the end of the experiment, the authors noticed that most of the students’ activity in the Mastery Grids was registered during the last weeks of the course.
  • At that point in the term, the provided educational tools served more as a knowledge-confirmation instrument than as one for accurately measuring the knowledge acquisition process.
  • Given this, for further analysis the authors only consider the subset of students who clearly exhibited a need for remediation at some point in their work with the system, reflected by: (a) not having a high success rate on their submissions, and (b) accessing at least a couple of recommended remedial problems.
  • After filtering out students who did not exhibit trouble when solving problems, the authors ended up with 18 students in the TextualExp group, 25 in VisualExp, 20 in NoExp, and 20 in DualExp.
  • Table 1 shows the summary statistics for important usage variables after the filtering process mentioned above.

5.1 Persuasiveness of Recommendations

  • The authors defined three levels of engagement with the activities: (1) the probability of accessing them when they are moused over (access rate), (2) the probability of attempting them once opened (conversion rate), and (3) the probability of keeping working on an activity until solving it correctly (persistence rate).
  • For the conversion rate, a marginally significant interaction effect between access/non-access to a visual explanation and the activity type (recommended/non-recommended) was found, F(1,74)=3.738, p=.057 (see left side of Fig. 5).
  • This result suggests that, on average, when students had access to visual explanations for the recommended content, they were more willing to work on already-opened recommended activities.
  • The same marginally significant interaction effect was found between textual explanations and the activity type (recommended/non-recommended), F(1,74)=3.377, p=.07.
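The reported F(1,74) values fit a mixed design with explanation access as a between-subjects factor and activity type as a within-subjects factor. The paper does not name its statistical tooling; a minimal sketch of such an analysis, assuming a long-format table with hypothetical column names, could use pingouin:

```python
import pandas as pd
import pingouin as pg

# Hypothetical long-format data: one row per student x activity type,
# with each student's conversion rate on recommended vs. non-recommended
# activities and a flag for whether they saw visual explanations.
df = pd.read_csv("engagement_rates.csv")  # column names assumed below

aov = pg.mixed_anova(
    data=df,
    dv="conversion_rate",      # probability of attempting an opened activity
    within="activity_type",    # recommended vs. non-recommended (repeated)
    between="visual_exp",      # saw visual explanations or not
    subject="student_id",
)
print(aov[["Source", "F", "p-unc"]])  # the Interaction row is the test of interest
```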

5.2 Effects of Recommendations on Learning

  • In order to link the effect of working on remedial recommendations to students' final exam scores in the course, the authors fitted a multiple linear regression model including variables such as the pretest score and the proportion of attempted problems that were recommended.
  • No significant effect of interactions with remedial learning content was found in any of the four treatment groups.
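A sketch of such a regression with statsmodels, using hypothetical column names for the pretest score, the proportion of attempted problems that were recommended, and the final exam score:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical per-student table; the column names are assumptions.
df = pd.read_csv("students.csv")

model = smf.ols(
    "final_exam ~ pretest + prop_recommended_attempted",
    data=df,
).fit()
print(model.summary())  # inspect the coefficient and p-value per predictor
```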

5.3 Subjective feedback

  • At the end of the experiment, the authors asked students to fill out a post-questionnaire covering aspects such as overall satisfaction with the system (3 items) and their perception of the quality of the remedial recommendations provided (5 items).
  • Based on the survey responses the authors performed a factor analysis, which resulted in the confirmation of the existence of the aforementioned two factors (satisfaction and recommendations quality).
  • In terms of overall satisfaction with the system, the authors did not find any significant differences in the average satisfaction score given by the learners (see Fig. 8).
  • The authors found a significant interaction effect of the two explanatory treatments (i.e., visual and textual explanations) on the final opinion of students about the quality of the given recommendations F(1,57)=4.669, p=.035 (see Fig. 9).
  • Combining this finding with the conversion-rate results explained in section 5.1 sheds light on learners' reactions to having at least one component in the interface that provides some information about how the problem recommendations were generated.
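The paper does not name its factor-analysis tooling; a two-factor sketch with scikit-learn, assuming the eight questionnaire items sit in hypothetical columns q1–q8:

```python
import pandas as pd
from sklearn.decomposition import FactorAnalysis

# Hypothetical questionnaire responses: 3 satisfaction items plus
# 5 recommendation-quality items, one row per student.
items = pd.read_csv("post_questionnaire.csv")[[f"q{i}" for i in range(1, 9)]]

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
loadings = pd.DataFrame(
    fa.components_.T, index=items.columns, columns=["factor1", "factor2"]
)
print(loadings.round(2))  # items should load on their intended factor
```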

6 DISCUSSION AND CONCLUSIONS

  • In summary, the studies confirmed that the presence of remedial recommendations affects students' learning content selection behavior – recommended activities highlighted with stars were on average more attractive.
  • While this correlates with past research on the impact of recommendations, it doesn't provide reliable evidence of recommendation quality or impact, since users are known to trust recommendations even when they are deliberately deceptive [10].
  • Another interesting finding is that learners were more persistent in problems (whether recommended or not) when they had access to full or no recommendation explanations rather than partial ones.
  • According to these data, remedial recommendations were perceived to be of better quality when they were justified by a partial recommendation explanation (either textual only or visual only) than when students received no or complete explanations.
  • With these partial explanations, learners were able to only "partially" understand the underlying approach for generating the recommendations, which was sometimes imperfect given (a) the late stage of their learning process and (b) the low accuracy of the student model.


Barria-Pineda, Jordan; Akhuseyinoglu, Kamil; Brusilovsky, Peter; Pollari-Malmi, Kerttu; Sirkiä,
Teemu; Malmi, Lauri
Personalized Remedial Recommendations for SQL Programming Practice System
Published in:
UMAP 2020 Adjunct - Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and
Personalization
DOI:
10.1145/3386392.3399312
Published: 14/07/2020
Document Version
Peer reviewed version
Please cite the original version:
Barria-Pineda, J., Akhuseyinoglu, K., Brusilovsky, P., Pollari-Malmi, K., Sirkiä, T., & Malmi, L. (2020).
Personalized Remedial Recommendations for SQL Programming Practice System. In UMAP 2020 Adjunct -
Adjunct Publication of the 28th ACM Conference on User Modeling, Adaptation and Personalization (pp. 135-
142). ACM. https://doi.org/10.1145/3386392.3399312

Personalized Remedial Recommendations for SQL
Programming Practice System
Jordan Barria-Pineda
Kamil Akhuseyinoglu
jab464@pitt.edu
kaa108@pitt.edu
University of Pittsburgh
Pittsburgh, USA
Peter Brusilovsky
University of Pittsburgh
Pittsburgh, USA
peterb@pitt.edu
Kerttu Pollari-Malmi
Aalto University
Espoo, Finland
kerttu@cs.hut.fi
Teemu Sirkiä
Aalto University
Espoo, Finland
teemu.sirkia@aalto.fi
Lauri Malmi
Aalto University
Espoo, Finland
lauri.malmi@aalto.fi
ABSTRACT
Personalized recommendation of learning content is one of the most frequently cited benefits of personalized online learning. It is expected that with personalized content recommendation students will be able to build their own unique and optimal learning paths and to achieve course goals in the most optimal way. However, in many practical cases students search for learning content not to expand their knowledge, but to address problems encountered in the learning process, such as failures to solve a problem. In these cases, students could be better assisted by remedial recommendations focused on content that could help in resolving current problems. This paper presents a transparent and explainable interface for remedial recommendations in an online programming practice system. The interface was implemented to support SQL programming practice and evaluated in the context of a large database course. The paper summarizes the insights obtained from the study and discusses future work on remedial recommendations.
CCS CONCEPTS
• Information systems → Recommender systems; • Social and professional topics → Computer science education; • Applied computing → Interactive learning environments.

KEYWORDS
educational recommender systems, explainability, transparency
ACM Reference Format:
Jordan Barria-Pineda, Kamil Akhuseyinoglu, Peter Brusilovsky, Kerttu Pollari-Malmi, Teemu Sirkiä, and Lauri Malmi. 2020. Personalized Remedial Recommendations for SQL Programming Practice System. In Adjunct Proceedings of the 28th ACM Conference on User Modeling, Adaptation and Personalization (UMAP '20 Adjunct), July 14–17, 2020, Genoa, Italy. ACM, New York, NY, USA, 8 pages. https://doi.org/10.1145/3386392.3399312

Both authors contributed equally to this research.
1 INTRODUCTION
Over the last decade, the issues of transparency and control in recommender systems (RecSys) have emerged as an important stream of research. One technology that has been studied in this context is the explanation of recommendations. Research has shown that explanations can increase the persuasiveness of the recommended items as well as users' trust and satisfaction with the recommender system [17]. Based on these results, guidelines have been developed for designing and evaluating the benefits of explanations [16]. Despite the increasing volume of research on explaining recommendations, this work has been predominantly focused on traditional taste-based and interest-based recommendation in e-commerce and media consumption systems and on such items as products, movies, or songs [11]. In this paper, we explore the problem of explaining recommendations in a considerably different domain, e-learning, where recommendations usually focus on the user's knowledge rather than interests. Here, we explore a relatively new class of remedial recommendations focused on helping to address problems encountered during the learning process. Following a brief review of related work, we introduce a novel interface for explaining remedial recommendations. The remaining part of the paper reviews the results of our study of this technology in a target educational context. We conclude by discussing lessons learned and planning future work.
2 RELATED WORK
A field that has been understudied in the RecSys context is the educational domain, where the main goal of a recommender system is to support students' learning by filtering educational content for the different learning settings that differ from one individual student to another [18]. Although there is a large body of research on Educational Recommender Systems (EdRecSys) [6], to the best of our knowledge, no research work has tried to investigate the effects of explanations for students in EdRecSys contexts. Thus, it is not clear how feasible it is to directly transfer the lessons learned in other recommendation domains into this context. The closest attempt we have identified is the work of Putnam and Conati [13], which studied students' perceptions and attitudes toward explanations for automatically generated hints in an Intelligent Tutoring System scenario.

The aforementioned gap is important to address because EdRecSys are different from conventional RecSys, as their main goal is supporting students' learning [6]. Thus, not only are the interests/preferences of the end-users (students) important for generating recommendations; the level of domain knowledge at each stage of their learning is also crucial for suggesting appropriate learning activities to each student [18]. Hence, this difference makes it necessary to (1) include students and instructors in the design of the recommendation approach from its conception [14], and (2) define proper evaluation metrics to assess the recommendations' effectiveness, which must include students' learning [7].

The implementation of EdRecSys stresses how critical it is to consider that each student is unique in terms of her "knowledge readiness" for attempting learning activities. Likewise, we hypothesize that it is also important to ponder the students' ability to process recommendations and understand potential explanations. In other scenarios, researchers have discovered that the explanations' level of detail can affect users' mental models with both positive and negative effects [11]. Additionally, they have found that explanations' complexity (e.g., depth and visual format) could either help or burden users' understanding [15].

Altogether, there is an evident lack of research on the effects of transparency in the context of educational recommendations. Exploring the potential benefits/drawbacks of recommendations and their explanations will contribute to improving the future of adaptive and personalized online learning. In this paper, we aim to fill this research gap through an empirical classroom study.
3 SQL PROGRAMMING PRACTICE SYSTEM WITH REMEDIAL RECOMMENDATIONS

3.1 Interface
The core of the SQL Programming Practice System is the Mastery Grids interface, which offers open learner modeling (OLM) [5] and provides access to several types of learning content [12]. The version of the Mastery Grids interface for SQL is presented in Figure 1-A. Mastery Grids uses a topic-level approach to OLM where the course content is grouped into a set of topics (see Fig. 1-A). The level of progress for each topic is visualized using color opacity: the higher the opacity, the higher the progress of the learner in that topic. In addition to the topic-level progress visualization, Mastery Grids shows the progress level for each type of content for each topic. In Fig. 1-A, the available practice content for the topic SELECT-FROM is shown, as well as the associated progress level for each content type. In addition, the interface recommends the most appropriate learning activities to each learner in two ways: (1) highlighting recommended activities with stars on the grid of activities and (2) offering them as a ranked list to the left of the grid. Students can access the learning content by clicking on an activity cell or a line of the ranked list.
3.2 Learning Content
In this study, Mastery Grids provided access to three types of interactive practice content for learning SQL programming: annotated examples (labeled as Examples in Fig. 1-A), animated examples, and query problems. Annotated examples provide step-by-step textual explanations of SQL query statements, which are delivered by the WebEx system [4]. Query animations visualize the execution of a query. The aim of these examples is to demonstrate visually how various query clauses are executed (step by step) to help students understand the semantics of the query. Finally, query problems require students to write an SQL query that solves the given problem prompt using the associated database schema. The correctness of the query is evaluated against a model solution using the sample database, and immediate feedback is provided. These problems are served by the SQL-KnoT (Knowledge Tester) system [3]. SQL-KnoT leverages template-based problem generation: every time a student accesses a problem, the actual problem is randomly selected from a problem set associated with the template. SQL-KnoT problems are critical for the study because the knowledge level of a student is updated based on her attempts on these problems.
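As a concrete illustration of this template-based generation, the following minimal Python sketch draws a random instance from a template's pool on each access; the template identifiers and problem texts are hypothetical, and SQL-KnoT's actual implementation is not described at this level in the paper:

```python
import random

# Hypothetical problem pools: each SQL-KnoT template maps to a set of
# isomorphic problem instances generated from that template.
PROBLEM_POOLS = {
    "select-from-basic": [
        "List all names from the Employee table.",
        "List all titles from the Movie table.",
    ],
    "where-comparison": [
        "List employees with a salary above 50000.",
        "List movies released after 2010.",
    ],
}

def serve_problem(template_id: str) -> str:
    """Return a randomly selected problem instance for the template,
    so repeated accesses to the same activity yield varying problems."""
    return random.choice(PROBLEM_POOLS[template_id])
```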
3.3 Student Modeling and Knowledge Level Visualization
Mastery Grids can depict the concept-level knowledge estimation as a bar chart (see Fig. 1-C&D). Each bar represents the student's actual knowledge level estimated by the system for a specific concept. Initially, all bar lengths are set to 0 and start to increase based on successful problem-solving attempts. The knowledge estimates are calculated by the CUMULATE user modeling server [19]. CUMULATE combines evidence generated from problem-solving attempts using an asymptotic function, which is used to calculate the probability of a learner mastering a concept. The probability of mastery increases with each successful attempt. CUMULATE does not take wrong attempts into consideration; therefore, there is no decrease in knowledge level (i.e., there is no penalty in the student model) even if a student fails. The asymptotic nature of the CUMULATE student modeling function implies that when a student starts studying a new concept, learning gains are high; however, these gains rapidly become smaller as the student becomes more proficient on the subject.
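The paper does not give CUMULATE's exact update rule, but a standard asymptotic update consistent with the description (gains only on success, diminishing as mastery grows) looks like the following sketch; the learning rate gamma is an assumed placeholder:

```python
def update_knowledge(p: float, success: bool, gamma: float = 0.4) -> float:
    """Asymptotic knowledge update in the spirit of CUMULATE: each
    successful attempt moves the mastery estimate toward 1.0 with
    diminishing gains; failed attempts are ignored (no penalty),
    as the paper describes. gamma is a hypothetical learning rate."""
    if success:
        return p + gamma * (1.0 - p)  # large gains early, small gains near mastery
    return p  # wrong attempts do not decrease the estimate
```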
The concepts are grouped and arranged along the x-axis by topic, according to the order in which topics are covered in the course (see Fig. 1-C&D). When students mouse over a grid cell that represents a topic, the interface highlights the concepts in the bar chart that the topic covers. Learners can relate their estimated knowledge of the concepts in a specific topic to the presence or absence of bars in the chart. The concepts associated with each content item are highlighted in the same way. The height of each bar indicates the student's estimated level of knowledge. We also used a second visual encoding variable to represent the level of struggle with a specific concept. This variable is color, and we defined a color scale going from red to green: the bar color gets greener with a higher success rate, and it turns gray if the concept has not been practiced recently. If a concept is labeled as struggling (see section 3.4.1), the system depicts it with a warning sign shown on top of the concept bar, as shown in Fig. 1-C&D.
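A minimal sketch of this red-to-green encoding; the paper specifies only the anchor colors (red at 0%, yellow at 50%, green at 100%) and the gray case, so the linear interpolation between them is an assumption:

```python
from typing import Optional

def bar_color(success_rate: Optional[float]) -> str:
    """Map a concept's recent success rate onto the described scale:
    red (0%) through yellow (50%) to green (100%), and gray when the
    concept has not been practiced recently."""
    if success_rate is None:            # not practiced recently
        return "rgb(158, 158, 158)"     # gray
    if success_rate <= 0.5:             # interpolate red -> yellow
        t = success_rate / 0.5
        return f"rgb(255, {int(255 * t)}, 0)"
    t = (success_rate - 0.5) / 0.5      # interpolate yellow -> green
    return f"rgb({int(255 * (1 - t))}, 255, 0)"
```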

Presenting the current progress level provides navigational sup-
port to the students. In our previous study [8], we introduced per-
sonalized recommendation approaches to improve existing navi-
gational support. The top three recommended content items were
highlighted using red stars on colored cells for topics and con-
tent. This way of representing recommended items does not force
students to follow the recommendations but rather help them to
combine both progress information and recommendation to decide
their next action step. Originally, Mastery Grids does not provide
any hint or explanation for a given recommendation. In our previ-
ous work [
1
], the interface was redesigned to connect recommen-
dations with a ner-grained concept-level OLM and explored an
approach to explain the learning content recommendations. Fol-
lowing that study, we shared our new system design and the study
plan in [
2
] where we focused on producing remedial recommen-
dations to support struggling students. Moreover, we introduced a
simpler recommendation approach and reduced the complexity of
the student modeling service. The details of the recommendation
approach and student modeling are explained in the next section.
Following [
2
], in the current paper, we share the results of that
study.
3.4 Educational Recommender System
3.4.1 Recommendation Approach. In this study, we used a remedial recommendation approach, which focuses on suggesting learning activities that cover some of the concepts with which students have exhibited some level of struggle [1, 2]. In other words, remedial recommendations should target the concepts with which a student struggled recently.

To generate remedial recommendations, we used the concept-level knowledge estimated by the CUMULATE student model and the concept-level success rate to calculate a difficulty score for each learning activity. The difficulty score $\mathit{diff}_{ij}$ of an activity $i$ for student $j$ is calculated by equation (1):

$$\mathit{diff}_{ij} = \frac{1}{\sum_{k} w_{k}} \sum_{k} w_{k} \left( \alpha\, Q_{kj} + (1 - \alpha)\, s_{kj} \right) \qquad (1)$$

where $k$ is a concept associated with activity $i$, $Q_{kj}$ is the knowledge level estimate, $s_{kj}$ is the success rate of student $j$ on concept $k$, and $w_{k}$ is the topic-level importance of the concept calculated using a tf-idf approach (i.e., the more unique a concept is within a topic, the higher its importance). We considered each problem-solving attempt as an opportunity for the concepts associated with it and calculated the average success rate per concept over the last $t$ attempts. For this study, $t$ is set to 10 and $\alpha$ is set to 0.5 to put equal weight on the knowledge level and the success rate.

To focus on struggled concepts, we eliminate from the recommendation process any learning activity that does not cover a struggling concept. We defined a concept as struggling if students started to fail on problems that cover it; specifically, using the concept-based success rate $s_{kj}$, a concept is labeled as struggling if $s_{kj} < 0.5$. As $s_{kj}$ is calculated over the last $t$ attempts, the system will stop labeling a concept as struggling once the student starts to perform well (i.e., the success rate rises above 0.5). We further calculated the median difficulty score after each attempt to identify suitable activities to recommend as remediation, i.e., learning content that is neither too hard nor too easy. We hypothesized that activities at the median difficulty level for a student would be neither so hard nor so easy as to cause further hardship or discouragement. The details of the recommendation process can be found in [2].
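A compact sketch of this pipeline under the stated parameters (α = 0.5, struggling threshold 0.5): the difficulty function implements equation (1) directly, while the data structures and function names are illustrative, not the authors' implementation. Q, s, and w are assumed to be per-student dictionaries mapping concept ids to the knowledge estimate, recent success rate, and tf-idf weight, respectively.

```python
def difficulty(activity_concepts, Q, s, w, alpha=0.5):
    """Difficulty score from Eq. (1): a weighted average, over the
    activity's concepts, of knowledge level Q[k] and recent success
    rate s[k], with tf-idf concept weights w[k]."""
    total_w = sum(w[k] for k in activity_concepts)
    return sum(w[k] * (alpha * Q[k] + (1 - alpha) * s[k])
               for k in activity_concepts) / total_w

def recommend_remedial(activities, Q, s, w, threshold=0.5):
    """Drop activities covering no struggling concept (recent success
    rate below the threshold), then rank the rest by closeness to the
    median difficulty -- neither too hard nor too easy."""
    candidates = {a: cs for a, cs in activities.items()
                  if any(s[k] < threshold for k in cs)}
    if not candidates:
        return []
    scores = {a: difficulty(cs, Q, s, w) for a, cs in candidates.items()}
    median = sorted(scores.values())[len(scores) // 2]
    return sorted(scores, key=lambda a: abs(scores[a] - median))
```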
3.4.2 Explanations for Learning Content Recommendations. Given our past work on communicating to learners the reasons behind suggesting learning activities to them [1], we decided to define different experimental treatments by combining textual and visual explanatory elements. Our goal here is to examine the different effects that different explanation formats can have on students working with recommended content in an online learning environment. Thus, we defined 4 treatment groups, explained here:

(1) NoExp group: In this group, no explanation is provided to the students when mousing over a recommended activity (see A in Fig. 1). Thus, learners do not know why a specific learning material was suggested to them.

(2) TextualExp group: Only an explanation based on natural language is provided to these students when mousing over recommended activities (see B in Fig. 1). This explanation format textually describes (a) how many struggling concepts the learning content covers and, at the same time, (b) in how many concepts in that specific activity the student has shown a good proficiency level (which makes it more approachable for resolving the ongoing misconceptions).

(3) VisualExp group: Here, a concept-based OLM is used as a visual explanatory component when mousing over recommended activities (see C in Fig. 1). By examining their own OLMs, students are able to see how many struggling concepts they have (as these are highlighted with warning signs) and their respective level of proficiency on each of the concepts covered by the activity. Each concept bar visualizes the learner's knowledge state by means of two graphic variables:
length: shows the cumulative estimate of the student's knowledge, calculated based on historical performance on problems involving that concept.
color: shows the success rate on the most recent attempts on problems that include that concept, using a scale that ranges from red (0%) to green (100%), with yellow as an intermediate point (50%).
It is important to mention that for this group, the visual OLM highlights the associated concepts for any moused-over activity, not only for recommended ones (i.e., when mousing over non-recommended activities, students can still see their knowledge state on the associated concepts).

(4) DualExp group: In this version of the interface, both the textual and visual explanatory components explained above are shown to the students when mousing over recommended activities (see D in Fig. 1). We hypothesize that by having these explanatory components together, learners could get a clearer picture of why the recommendation algorithm selected those specific activities to remediate their misconceptions (a balance between struggling concepts and concepts where they are proficient).
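The four groups are simply the 2×2 combinations of the two explanatory components, which can be made explicit in a small configuration sketch; the Treatment type is hypothetical, while the group names follow the paper's labels:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Treatment:
    textual: bool  # show the natural-language explanation on mouse-over
    visual: bool   # show the concept-based OLM bar chart on mouse-over

# The four experimental treatments as combinations of the two components.
TREATMENTS = {
    "NoExp":      Treatment(textual=False, visual=False),
    "TextualExp": Treatment(textual=True,  visual=False),
    "VisualExp":  Treatment(textual=False, visual=True),
    "DualExp":    Treatment(textual=True,  visual=True),
}
```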
4 STUDY DESCRIPTION
We conducted a classroom study in the Spring 2019 term at Aalto University, a major research university in Finland, from February to May.

Figure 1: Four different experimental treatments which combine textual and visual elements for explaining recommendations in Mastery Grids: (A) No explanation (B) Textual explanation only (C) Visual explanation only (D) Visual and textual explanation combined
Mastery Grids was offered as a practice system to students who were enrolled in an undergraduate database management course named "CS-A1150 - Databases". The course covers topics such as relational modeling, relational algebra, UML modeling, SQL, and transaction management. The course is compulsory for Computer Science and Industrial Engineering and Management majors and highly recommended for Computer Science minors. The course is also taken by many other students in various bachelor's and master's programs. In total, over 550 students enrolled in the course. At the beginning of the course, we informed students about this research and asked them to give their consent to participate in it. The data used in this research is from those students who gave their consent and were engaged with Mastery Grids in some way. Not all students used the system, because it was a voluntary additional learning resource.

Mastery Grids content is designed to help students practice SQL topics, and it was accessed through a direct link from the A+ course management system [9], which was used to deliver the other contents and exercises of the course. The use of Mastery Grids was not mandatory, but to encourage participation, 20 extra exercise points (about 7.5% of all available exercise points) were given to students if they solved 2 SQL problems per topic.

In this study, we followed a pre/post-test design to examine the learning gain throughout the semester. The pretest was administered before the SQL topics were introduced, and the post-test at the end of the semester. Both tests include 10 problems: 5 multiple-choice and 5 SQL fill-in-the-blank problems covering data definition, data query, and data manipulation SQL commands related to a given database schema. Post-test problems were isomorphic to the pretest. However, in our analysis, we realized that students did not spend enough time on the post-test problems and decided to use the final exam grades instead of post-test scores. Moreover, students were asked to complete a questionnaire related to Mastery Grids usage at the end of the semester. The questionnaire consisted of questions related to system satisfaction, interface features, and recommendation quality. To encourage the students to take the pre/post-tests and complete the questionnaire, 4 exercise points and 1 exam bonus point (out of 40 total exam points) were given to the students who completed them.
5 RESULTS
At the end of the experiment, we noticed that most of the students' activity in Mastery Grids was registered during the last weeks of the course. At that point in the term, the provided educational tools served more as a knowledge-confirmation instrument than as one for accurately measuring the knowledge acquisition process. Given this, for further analysis we only consider the subset of students who clearly exhibited a need for remediation at some point in their work with the system, reflected by: (a) not having a high success rate on their submissions, and (b) accessing at least a couple of recommended remedial problems. Thus, in the subsequent analysis, we only considered students with an average success rate lower than 75% and with more than 2 attempts on recommended problems. After filtering out students who did not exhibit trouble when solving problems, we ended up with 18 students in the TextualExp group, 25 in VisualExp, 20 in NoExp, and 20 in DualExp.
References
Journal ArticleDOI
TL;DR: In this article, the authors present a context framework that identifies relevant context dimensions for TEL applications and present an analysis of existing TEL recommender systems along these dimensions, based on their survey results, they outline topics on which further research is needed.
Abstract: Recommender systems have been researched extensively by the Technology Enhanced Learning (TEL) community during the last decade. By identifying suitable resources from a potentially overwhelming variety of choices, such systems offer a promising approach to facilitate both learning and teaching tasks. As learning is taking place in extremely diverse and rich environments, the incorporation of contextual information about the user in the recommendation process has attracted major interest. Such contextualization is researched as a paradigm for building intelligent systems that can better predict and anticipate the needs of users, and act more efficiently in response to their behavior. In this paper, we try to assess the degree to which current work in TEL recommender systems has achieved this, as well as outline areas in which further work is needed. First, we present a context framework that identifies relevant context dimensions for TEL applications. Then, we present an analysis of existing TEL recommender systems along these dimensions. Finally, based on our survey results, we outline topics on which further research is needed.

527 citations

Book ChapterDOI
01 Jan 2011
TL;DR: This chapter gives an overview of the area of explanations in recommender systems, and approaches the literature from the angle of evaluation: that is, what makes an explanation “good”, and suggest guidelines as how to best evaluate this.
Abstract: This chapter gives an overview of the area of explanations in recommender systems. We approach the literature from the angle of evaluation: that is, we are interested in what makes an explanation “good”, and suggest guidelines as how to best evaluate this. We identify seven benefits that explanations may contribute to a recommender system, and relate them to criteria used in evaluations of explanations in existing systems, and how these relate to evaluations with live recommender systems. We also discuss how explanations can be affected by how recommendations are presented, and the role the interaction with the recommender system plays w.r.t. explanations. Finally, we describe a number of explanation styles, and how they may be related to the underlying algorithms. Examples of explanations in existing systems are mentioned throughout.

334 citations


"Personalized Remedial Recommendatio..." refers background in this paper

  • ...Based on these results, guidelines have been developed for designing and evaluating the benefits of explanations [16]....


Journal ArticleDOI
TL;DR: This paper focuses particularly on effectiveness (helping users to make good decisions) and its trade-off with satisfaction and provides an overview of existing work on evaluating effectiveness and the metrics used.
Abstract: When recommender systems present items, these can be accompanied by explanatory information. Such explanations can serve seven aims: effectiveness, satisfaction, transparency, scrutability, trust, persuasiveness, and efficiency. These aims can be incompatible, so any evaluation needs to state which aim is being investigated and use appropriate metrics. This paper focuses particularly on effectiveness (helping users to make good decisions) and its trade-off with satisfaction. It provides an overview of existing work on evaluating effectiveness and the metrics used. It also highlights the limitations of the existing effectiveness metrics, in particular the effects of under- and overestimation and recommendation domain. In addition to this methodological contribution, the paper presents four empirical studies in two domains: movies and cameras. These studies investigate the impact of personalizing simple feature-based explanations on effectiveness and satisfaction. Both approximated and real effectiveness is investigated. Contrary to expectation, personalization was detrimental to effectiveness, though it may improve user satisfaction. The studies also highlighted the importance of considering opt-out rates and the underlying rating distribution when evaluating effectiveness.

288 citations

Proceedings ArticleDOI
24 Oct 2013
TL;DR: It is suggested that completeness is more important than soundness: increasing completeness via certain information types helped participants' mental models and, surprisingly, their perception of the cost/benefit tradeoff of attending to the explanations.
Abstract: Research is emerging on how end users can correct mistakes their intelligent agents make, but before users can correctly “debug” an intelligent agent, they need some degree of understanding of how it works. In this paper we consider ways intelligent agents should explain themselves to end users, especially focusing on how the soundness and completeness of the explanations impacts the fidelity of end users' mental models. Our findings suggest that completeness is more important than soundness: increasing completeness via certain information types helped participants' mental models and, surprisingly, their perception of the cost/benefit tradeoff of attending to the explanations. We also found that oversimplification, as per many commercial agents, can be a problem: when soundness was very low, participants experienced more mental demand and lost trust in the explanations, thereby reducing the likelihood that users will pay attention to such explanations at all.

279 citations


"Personalized Remedial Recommendatio..." refers background in this paper

  • ...media consumption systems and on such items as products, movies or songs [11]....


  • ...In other scenarios, researchers have discovered that the explanations’ level of detail can affect users’ mental models with both positive and negative effects [11]....


Book ChapterDOI
01 Jan 2010
TL;DR: The range of purposes that Open Learner Models can serve is described, illustrated with diverse examples of the ways they have been made available in several research systems.
Abstract: An Open Learner Model makes a machines’ representation of the learner available as an important means of support for learning. This means that a suitable interface is created for use by learners, and in some cases for others who aid their learning, including peers, parents and teachers. The chapter describes the range of purposes that Open Learner Models can serve, illustrating these with diverse examples of the ways that they have been made available in several research systems. We then discuss the closely related issues of openness and learner control and the ways that have been explored to support learning by making the learner model available to people other than the learner. This chapter provides a foundation for understanding the range of ways that Open Learner Models have already been used to support learning as well as directions yet to be explored.

230 citations


"Personalized Remedial Recommendatio..." refers background in this paper

  • ...The core of the SQL Programming Practice System is the Mastery Grids interface which offers open learner modeling (OLM) [5] and provides access to several types of learning content [12]....

  • ...(3) VisualExp group: Here, a concept-based OLM is used as a visual explanatory component when mousing over recommended activities (see C in Fig....

  • ...The version of Mastery Grids interface for SQL is presented in Figure 1-A. Mastery Grids uses a topic-level approach to OLM where the course content is grouped into a set of topics (see Fig....

  • ...It is important to mention that for this group, the visual OLM highlights the information about the associated concepts for the moused over activities, but not only for the recommended ones (i.e., when mousing over non-recommended activities they can see their knowledge state on associated concepts)....

  • ...By examining their own OLMs, students are able to know how many struggling concepts they have (as they are highlighted with warning signs) and their respective level of proficiency on each of the concepts covered by the activity....

Frequently Asked Questions (2)
Q1. What contributions have the authors mentioned in the paper "Personalized remedial recommendations for sql programming practice system" ?

This paper presents a transparent and explainable interface for remedial recommendations in an online programming practice system. The paper summarizes the insights obtained from the study and discusses future work on remedial recommendations. 

For example, the authors did not get insights from students who were presented with remedial recommendations in Mastery Grids but never accessed or attempted the recommended content, which is one of the target aspects to explore in future studies. In this way, the authors will be able to measure the effects of having this remedial recommender support for students throughout all the incremental stages in the course.