Open Access · Proceedings Article
Learning Program Embeddings to Propagate Feedback on Student Code
Chris Piech, Jonathan Huang, Andy Nguyen, Mike Phulsuksombati, Mehran Sahami, Leonidas J. Guibas
pp. 1093–1102
TL;DR
A neural network method is introduced to encode programs as a linear mapping from an embedded precondition space to an embedded postcondition space, and an algorithm for feedback at scale is proposed using these linear maps as features.
Abstract:
Providing feedback, both assessing final work and giving hints to stuck students, is difficult for open-ended assignments in massive online classes, which can range from thousands to millions of students. We introduce a neural network method to encode programs as a linear mapping from an embedded precondition space to an embedded postcondition space and propose an algorithm for feedback at scale using these linear maps as features. We apply our algorithm to assessments from the Code.org Hour of Code and Stanford University's CS1 course, where we propagate human comments on student assignments to orders of magnitude more submissions.
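As a rough illustration of the idea in the abstract, the sketch below fits a matrix M so that M applied to an embedded precondition state approximates the embedded postcondition state, then uses the flattened matrix as the program's feature vector. This is illustrative only, not the paper's trained architecture; the function names and the toy "doubling" program are hypothetical.

```python
import numpy as np

def fit_program_matrix(pre_states, post_states):
    """Fit a linear map M with M @ pre ~= post by least squares.

    pre_states, post_states: (n_examples, d) arrays of embedded memory
    states observed before/after running one student program.
    """
    # Solve pre_states @ M.T ~= post_states for M (d x d).
    M_T, *_ = np.linalg.lstsq(pre_states, post_states, rcond=None)
    return M_T.T

def program_feature(M):
    """Flatten the program's matrix into a feature vector."""
    return M.ravel()

# Toy example: a "program" that doubles the first state dimension.
rng = np.random.default_rng(0)
pre = rng.normal(size=(50, 3))
post = pre.copy()
post[:, 0] *= 2.0
M = fit_program_matrix(pre, post)

# Feedback propagation: a new submission whose feature vector is close
# to a human-labelled one inherits that human comment.
labelled = {"doubles x": program_feature(M)}
```

Nearest-neighbour lookup over such feature vectors (e.g. by cosine distance) is one simple way human comments could be propagated to similar submissions.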
Citations
Posted Content
Teaching Temporal Logics to Neural Networks
TL;DR: The Transformer generalized from imperfect training data to the semantics of LTL; surprisingly, it returns a syntactically equivalent trace in 89% of cases on a held-out test set.
Proceedings ArticleDOI
CodeMend: Assisting Interactive Programming with Bimodal Embedding
TL;DR: This work presents CodeMend, a system to support finding and integration of code, which leverages a neural embedding model to jointly model natural language and code as mined from large Web and code datasets.
Journal ArticleDOI
Analyzing bug fix for automatic bug cause classification
TL;DR: A new model that exploits the knowledge in a bug fix by constructing fix trees from the diff source code at the Abstract Syntax Tree (AST) level and representing each fix tree using the encoding method of a Tree-based Convolutional Neural Network (TBCNN).
Journal ArticleDOI
A Comparison of the Quality of Data-Driven Programming Hint Generation Algorithms
Thomas W. Price, Yihuan Dong, Rui Zhi, Benjamin Paaßen, Nicholas Lytle, Veronica Cateté, Tiffany Barnes
TL;DR: This work presents the QualityScore procedure, a novel method for automatically evaluating and comparing the quality of next-step programming hints using expert ratings, and demonstrates that the automated QualityScore ratings agree with experts’ manual ratings.
Proceedings ArticleDOI
Providing Meaningful Feedback for Autograding of Programming Assignments
Georgiana Haldeman, Andrew Tjang, Monica Babes-Vroman, Stephen A. Bartos, Jay V. Shah, Danielle Yucht, Thu D. Nguyen
TL;DR: A methodology for extending autograders to provide meaningful feedback for incorrect programs; the hints given for erroneous submissions are found to be helpful in 96% or more of cases.
References
Proceedings Article
Adaptive Subgradient Methods for Online Learning and Stochastic Optimization
TL;DR: Adaptive subgradient methods that dynamically incorporate knowledge of the geometry of the data observed in earlier iterations to perform more informative gradient-based learning, finding needles in haystacks in the form of very predictive but rarely seen features.
Journal Article
Adaptive Subgradient Methods for Online Learning and Stochastic Optimization
TL;DR: This work describes and analyzes an apparatus for adaptively modifying the proximal function, which significantly simplifies setting a learning rate and results in regret guarantees that are provably as good as the best proximal function that can be chosen in hindsight.
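The per-coordinate update at the heart of the AdaGrad method described in the two entries above can be sketched in a few lines. This is a minimal illustration of the diagonal variant only, not the authors' full proximal-function framework; the learning rate and the toy quadratic objective are arbitrary choices.

```python
import numpy as np

def adagrad_step(w, grad, accum, lr=0.1, eps=1e-8):
    """One AdaGrad update: per-coordinate step sizes shrink fastest
    for coordinates with historically large gradients."""
    accum = accum + grad ** 2                    # running sum of squared grads
    w = w - lr * grad / (np.sqrt(accum) + eps)   # per-coordinate scaling
    return w, accum

# Minimise f(w) = 0.5 * w @ A @ w with badly scaled curvature:
# AdaGrad adapts each coordinate's step size automatically.
A = np.diag([100.0, 1.0])
w = np.array([1.0, 1.0])
accum = np.zeros_like(w)
for _ in range(200):
    grad = A @ w
    w, accum = adagrad_step(w, grad, accum)
```

After 200 steps both coordinates are driven close to the optimum at zero despite the 100:1 curvature mismatch, which a single global learning rate would handle poorly.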
Journal Article
Random search for hyper-parameter optimization
James Bergstra, Yoshua Bengio
TL;DR: This paper shows empirically and theoretically that randomly chosen trials are more efficient for hyper-parameter optimization than trials on a grid, and shows that random search is a natural baseline against which to judge progress in the development of adaptive (sequential) hyper- parameter optimization algorithms.
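A minimal sketch of the random-search baseline this entry describes, under illustrative assumptions: the search space, the toy objective (where only the learning rate matters, the setting in which random search beats a grid), and all names are hypothetical.

```python
import random

def random_search(objective, space, n_trials=60, seed=0):
    """Random hyper-parameter search: sample each trial independently
    from the search distributions and keep the best configuration."""
    rng = random.Random(seed)
    best_params, best_score = None, float("inf")
    for _ in range(n_trials):
        params = {name: sample(rng) for name, sample in space.items()}
        score = objective(params)
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

space = {
    "lr": lambda rng: 10 ** rng.uniform(-4, 0),    # log-uniform sampling
    "dropout": lambda rng: rng.uniform(0.0, 0.5),
}
# Toy objective: loss depends only on lr, with optimum at lr = 0.01.
objective = lambda p: (p["lr"] - 0.01) ** 2
best, score = random_search(objective, space)
```

Unlike a grid, every trial here probes a fresh value of each hyper-parameter, so the important axis (lr) gets 60 distinct samples rather than a handful of grid points.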
Proceedings Article
Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank
Richard Socher, Alex Perelygin, Jean Y. Wu, Jason Chuang, Christopher D. Manning, Andrew Y. Ng, Christopher Potts
TL;DR: A Sentiment Treebank that includes fine-grained sentiment labels for 215,154 phrases in the parse trees of 11,855 sentences and poses new challenges for sentiment compositionality; the Recursive Neural Tensor Network is introduced to address them.
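The composition step of the Recursive Neural Tensor Network this entry refers to combines two child vectors through a tensor plus a standard affine term. The numpy sketch below shows that single step only; dimensions, initialisation scale, and the omitted bias are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def rntn_compose(a, b, V, W):
    """One RNTN composition: parent = tanh(c^T V c + W c), where
    c = [a; b] is the concatenated children, V is a d x 2d x 2d
    tensor, and W is a d x 2d matrix (bias omitted for brevity)."""
    c = np.concatenate([a, b])                        # (2d,)
    tensor_term = np.einsum("i,kij,j->k", c, V, c)    # slice-wise c^T V[k] c
    return np.tanh(tensor_term + W @ c)               # parent vector, (d,)

d = 4
rng = np.random.default_rng(0)
a, b = rng.normal(size=d), rng.normal(size=d)
V = rng.normal(scale=0.1, size=(d, 2 * d, 2 * d))
W = rng.normal(scale=0.1, size=(d, 2 * d))
parent = rntn_compose(a, b, V, W)   # same dimension as each child
```

Applied bottom-up over a parse tree, this step produces a vector for every phrase node, which is then fed to a sentiment classifier.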
Journal Article
A complexity measure
TL;DR: A graph-theoretic complexity measure for managing and controlling program complexity is presented; the measure is independent of physical size and depends only on the decision structure of a program.
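The measure this entry describes is cyclomatic complexity, computed from a program's control-flow graph as V(G) = E − N + 2P. A minimal sketch, with a hypothetical if/else control-flow graph as the example:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    """McCabe's measure V(G) = E - N + 2P for a control-flow graph
    with E edges, N nodes, and P connected components."""
    return len(edges) - len(nodes) + 2 * components

# Control-flow graph of a single if/else:
# entry -> cond; cond branches to then/else; both rejoin at exit.
nodes = ["entry", "cond", "then", "else", "exit"]
edges = [("entry", "cond"), ("cond", "then"), ("cond", "else"),
         ("then", "exit"), ("else", "exit")]
print(cyclomatic_complexity(edges, nodes))  # 5 - 5 + 2 = 2
```

The result matches the intuition that one decision point yields two linearly independent paths, regardless of how many statements each branch contains.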