Open Access · Journal Article · DOI

Predicting Students’ Academic Performance with Conditional Generative Adversarial Network and Deep SVM

TLDR
This study proposes an improved conditional generative adversarial network (CGAN) in combination with a deep-layer-based support vector machine (SVM) to predict students’ performance through school and home tutoring, and indicates that school and home tutoring combined have a positive impact on students’ performance when the model is trained after applying CGAN.
Abstract
The availability of educational data obtained by technology-assisted learning platforms can potentially be used to mine student behavior in order to address their problems and enhance the learning process. Educational data mining provides insights for professionals to make appropriate decisions. Learning platforms complement traditional learning environments and provide an opportunity to analyze students’ performance, thus mitigating the probability of student failures. Predicting students’ academic performance has become an important research area to take timely corrective actions, thereby increasing the efficacy of education systems. This study proposes an improved conditional generative adversarial network (CGAN) in combination with a deep-layer-based support vector machine (SVM) to predict students’ performance through school and home tutoring. Students’ educational datasets are predominantly small in size; to handle this problem, synthetic data samples are generated by an improved CGAN. To prove its effectiveness, results are compared with and without applying CGAN. Results indicate that school and home tutoring combined have a positive impact on students’ performance when the model is trained after applying CGAN. For an extensive evaluation of deep SVM, multiple kernel-based approaches are investigated, including radial, linear, sigmoid, and polynomial functions, and their performance is analyzed. The proposed improved CGAN coupled with deep SVM outperforms existing solutions from the literature in terms of sensitivity, specificity, and area under the curve.
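The kernel comparison described in the abstract can be illustrated with a short, hedged sketch: the snippet below is not the authors' pipeline and uses a synthetic stand-in for the (augmented) student-performance data, but it trains scikit-learn SVMs with the four kernels named above and reports the sensitivity, specificity, and AUC metrics used for evaluation.

```python
# Minimal sketch (not the paper's code): compare SVM kernels on synthetic data
# using the evaluation metrics named in the abstract.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, confusion_matrix

X, y = make_classification(n_samples=600, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for kernel in ("rbf", "linear", "sigmoid", "poly"):
    clf = SVC(kernel=kernel, probability=True, random_state=0).fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]
    tn, fp, fn, tp = confusion_matrix(y_te, clf.predict(X_te)).ravel()
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    auc = roc_auc_score(y_te, scores)
    print(f"{kernel:8s} sens={sensitivity:.3f} spec={specificity:.3f} auc={auc:.3f}")
```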


Citations
Journal Article · DOI

Electroencephalogram Signals for Detecting Confused Students in Online Education Platforms with Probability-Based Features

TL;DR: Experimental results suggest that by using the PBF approach on EEG data, 100% accuracy can be obtained for detecting confused students; K-fold cross-validation and performance comparison with existing approaches further corroborate the results.
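A hedged sketch of one way a probability-based-feature (PBF) idea can be implemented (the cited paper's exact construction is not given here): out-of-fold class probabilities from a base classifier are appended as extra features, and the combined model is evaluated with K-fold cross-validation on a synthetic stand-in for the EEG data.

```python
# Hedged PBF sketch: base-model class probabilities used as derived features,
# evaluated with K-fold cross-validation. Synthetic data replaces EEG signals.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_predict, cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=1)

# Out-of-fold probabilities so the derived features do not leak training labels
base = RandomForestClassifier(n_estimators=200, random_state=1)
pbf = cross_val_predict(base, X, y, cv=5, method="predict_proba")

meta = LogisticRegression(max_iter=1000)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
acc = cross_val_score(meta, np.hstack([X, pbf]), y, cv=cv, scoring="accuracy")
print("K-fold accuracy with PBF:", acc.mean().round(3))
```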
Journal Article · DOI

Clustering of LMS Use Strategies with Autoencoders

TL;DR: In this article, a method is proposed that determines teaching style in an unsupervised way from course structure and use patterns; an autoencoder is used to reduce the dimensionality of the input data while extracting the most important characteristics.
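A minimal sketch of the cited idea under stated assumptions (random numbers stand in for LMS course-use features, and plain k-means replaces whatever clustering the authors used): an autoencoder compresses the usage patterns, and the latent codes are clustered.

```python
# Toy autoencoder + clustering sketch; not the cited paper's architecture.
import torch
import torch.nn as nn
from sklearn.cluster import KMeans

torch.manual_seed(0)
X = torch.rand(300, 40)  # 300 courses x 40 hypothetical usage features

encoder = nn.Sequential(nn.Linear(40, 16), nn.ReLU(), nn.Linear(16, 4))
decoder = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 40))
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):  # train the autoencoder to reconstruct its input
    opt.zero_grad()
    loss = loss_fn(decoder(encoder(X)), X)
    loss.backward()
    opt.step()

codes = encoder(X).detach().numpy()  # low-dimensional representation
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(codes)
print(labels[:20])
```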
References
Journal Article · DOI

Greedy function approximation: A gradient boosting machine.

TL;DR: A general gradient descent boosting paradigm is developed for additive expansions based on any fitting criterion, and specific algorithms are presented for least-squares, least absolute deviation, and Huber-M loss functions for regression, and multiclass logistic likelihood for classification.
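For illustration only, scikit-learn's GradientBoostingRegressor follows this gradient-boosting paradigm and exposes the least-squares, least-absolute-deviation, and Huber losses through its `loss` parameter; the toy regression below is not tied to the original paper's experiments.

```python
# Gradient boosting with different regression losses (illustrative sketch).
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=400, n_features=10, noise=10.0, random_state=0)

for loss in ("squared_error", "absolute_error", "huber"):
    model = GradientBoostingRegressor(loss=loss, n_estimators=200, random_state=0)
    score = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{loss:15s} R^2 = {score:.3f}")
```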
Posted Content

Conditional Generative Adversarial Nets

Mehdi Mirza, +1 more
06 Nov 2014
TL;DR: The conditional version of generative adversarial nets is introduced, which can be constructed by simply feeding the conditioning data y to both the generator and the discriminator; it is shown that this model can generate MNIST digits conditioned on class labels.
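A minimal conditional-GAN sketch (toy 2-D data instead of MNIST, architecture chosen for brevity) showing the paper's central mechanism: the conditioning label y is concatenated to the inputs of both the generator and the discriminator.

```python
# Toy conditional GAN: the label y is fed to both G and D by concatenation.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
n_classes, z_dim, x_dim = 3, 8, 2

G = nn.Sequential(nn.Linear(z_dim + n_classes, 32), nn.ReLU(), nn.Linear(32, x_dim))
D = nn.Sequential(nn.Linear(x_dim + n_classes, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def real_batch(n):
    # Each class is a Gaussian blob centred at a different point
    y = torch.randint(0, n_classes, (n,))
    x = torch.randn(n, x_dim) * 0.3 + y.float().unsqueeze(1) * 2.0
    return x, F.one_hot(y, n_classes).float()

for step in range(2000):
    x, y = real_batch(64)
    z = torch.randn(64, z_dim)
    fake = G(torch.cat([z, y], dim=1))

    # Discriminator: real (x, y) -> 1, fake (G(z, y), y) -> 0
    d_loss = bce(D(torch.cat([x, y], 1)), torch.ones(64, 1)) + \
             bce(D(torch.cat([fake.detach(), y], 1)), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator: fool the discriminator for the same conditioning labels
    g_loss = bce(D(torch.cat([fake, y], 1)), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```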
Proceedings Article

Conditional image synthesis with auxiliary classifier GANs

TL;DR: A variant of GANs employing label conditioning is constructed that yields 128 × 128 resolution image samples exhibiting global coherence, and it is demonstrated that high-resolution samples provide class information not present in low-resolution samples.
Proceedings Article

InfoGAN: interpretable representation learning by information maximizing generative adversarial nets

TL;DR: InfoGAN is an information-theoretic extension of the GAN that learns disentangled representations in a completely unsupervised manner; it discovers visual concepts such as hair styles, the presence of eyeglasses, and emotions on the CelebA face dataset.
Journal Article

Multiple Kernel Learning Algorithms

TL;DR: Overall, using multiple kernels instead of a single one is useful; combining kernels in a nonlinear or data-dependent way seems more promising than a linear combination when fusing information from simple linear kernels, whereas linear methods are more reasonable when combining complex Gaussian kernels.
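As a concrete (and deliberately simple) illustration of the linear-combination baseline discussed in the survey, the sketch below sums three base kernels with fixed weights and passes the resulting Gram matrix to an SVM as a precomputed kernel; learned or nonlinear combinations are not shown.

```python
# Fixed linear combination of base kernels fed to an SVM as a precomputed kernel.
from sklearn.datasets import make_classification
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel, polynomial_kernel
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

def combined_kernel(A, B, weights=(1/3, 1/3, 1/3)):
    kernels = (rbf_kernel(A, B), linear_kernel(A, B), polynomial_kernel(A, B, degree=2))
    return sum(w * K for w, K in zip(weights, kernels))

clf = SVC(kernel="precomputed").fit(combined_kernel(X_tr, X_tr), y_tr)
print("test accuracy:", clf.score(combined_kernel(X_te, X_tr), y_te))
```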