Learning Program Embeddings to Propagate Feedback on Student Code
Citations
Cites background from "Learning Program Embeddings to Prop..."
...Recent efforts have focused on providing feedback to students about their programs by leveraging structural similarities in the code itself to allow feedback to be provided to many assignments at once that share particular features [23, 25]....
Cites background from "Learning Program Embeddings to Prop..."
...While first approaches exist to transform student data into a vectorial format, most data is still only available as structured data, such as sequences, trees or graphs [13]....
Cites methods from "Learning Program Embeddings to Prop..."
...The clusters are created either using heuristics based on program analysis techniques [8, 15, 10, 27, 23] or using program execution on a set of inputs [19, 20]....
...The clusters are typically used in the following two ways: (1) the feedback is generated manually for a representative program in each cluster and then customized to other members of the cluster automatically [19, 20, 8], and (2) for a buggy program, a correct program is selected from the same cluster as a reference implementation, which is then compared to the buggy program to generate a repair hint [15, 10, 27, 23]....
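The cluster-then-propagate workflow quoted above can be sketched minimally: group submissions by a structural key (here, hypothetically, the multiset of AST node types — real systems use richer program-analysis or execution-based features), then copy an instructor's feedback on one annotated representative to the rest of its cluster.

```python
import ast
from collections import Counter, defaultdict

def structural_key(source):
    """Hypothetical clustering key: the multiset of AST node types.
    Stands in for the heuristic/program-analysis features cited above."""
    counts = Counter(type(node).__name__ for node in ast.walk(ast.parse(source)))
    return tuple(sorted(counts.items()))

def propagate_feedback(submissions, instructor_feedback):
    """Cluster submissions by structural key, then copy feedback written
    for one representative to every other member of the same cluster."""
    clusters = defaultdict(list)
    for sid, src in submissions.items():
        clusters[structural_key(src)].append(sid)
    feedback = {}
    for members in clusters.values():
        annotated = next((sid for sid in members if sid in instructor_feedback), None)
        if annotated is not None:
            for sid in members:
                feedback[sid] = instructor_feedback[annotated]
    return feedback

subs = {
    "alice": "def f(x):\n    return x + 1",
    "bob":   "def g(y):\n    return y + 2",
    "carol": "def h(x):\n    while True:\n        pass",
}
fb = propagate_feedback(subs, {"alice": "Nice use of a pure function."})
```

Here "alice" and "bob" share a structural key, so the feedback written for one reaches both, while "carol" falls in a different cluster and stays unannotated.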
Cites methods from "Learning Program Embeddings to Prop..."
...g data. One promising direction is to neurally analyze software based on runtime information. So far, almost all existing work focuses on static neural software analysis, with some notable exceptions [30, 37]. Better models. A core concern of every neural software analysis is how to represent software as vectors that enable a neural model to reason about the software. Learned representations of code are a...
References
...Learning rates are set using Adagrad (Duchi et al., 2011)....
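The Adagrad rule cited here (Duchi et al., 2011) scales each parameter's learning rate by the inverse square root of that parameter's accumulated squared gradients. A minimal sketch on a toy quadratic (the learning rate and iteration count are illustrative choices, not the paper's settings):

```python
import numpy as np

def adagrad_update(params, grads, cache, lr=0.5, eps=1e-8):
    """One Adagrad step: the effective per-parameter learning rate
    shrinks as squared gradients accumulate in `cache`."""
    cache = cache + grads ** 2
    params = params - lr * grads / (np.sqrt(cache) + eps)
    return params, cache

# Minimize f(x) = x^2 starting from x = 5.0; the gradient is 2x.
x = np.array([5.0])
cache = np.zeros_like(x)
for _ in range(1000):
    x, cache = adagrad_update(x, 2.0 * x, cache)
```

Because the accumulated cache only grows, step sizes decay automatically, which is why Adagrad needs no hand-tuned schedule for the per-coordinate rates.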
"Learning Program Embeddings to Prop..." refers methods in this paper
...We use random search (Bergstra & Bengio, 2012) to optimize over hyperparameters (e.g., regularization parameters, matrix dimensions, and minibatch size)....
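Random search (Bergstra & Bengio, 2012), as used in the quoted passage over regularization strength, matrix dimensions, and minibatch size, simply samples configurations independently and keeps the best. A sketch with an assumed toy objective and search space:

```python
import random

def random_search(objective, space, n_trials=200, seed=0):
    """Random search: sample each hyperparameter uniformly from its
    range, score the configuration, and keep the best one seen."""
    rng = random.Random(seed)
    best_cfg, best_score = None, float("inf")
    for _ in range(n_trials):
        cfg = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        score = objective(cfg)
        if score < best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Hypothetical validation loss, minimized at reg = 0.01, dim = 64.
space = {"reg": (0.0, 0.1), "dim": (16, 256)}
obj = lambda c: (c["reg"] - 0.01) ** 2 + ((c["dim"] - 64) / 256) ** 2
best, score = random_search(obj, space)
```

Independent sampling is what makes random search effective when only a few hyperparameters matter: unlike a grid, every trial explores a fresh value of each dimension.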
"Learning Program Embeddings to Prop..." refers background or methods in this paper
...The programs for these assignments operate in maze worlds where an agent can move, turn, and test for conditions of its current location....
...Our models are related to recent work from the NLP and deep learning communities on recursive neural networks, particularly for modeling semantics in sentences or symbolic expressions (Socher et al., 2013; 2011; Zaremba et al., 2014; Bowman, 2013)....
...…on recursive neural networks (called the NPM-RNN model) in which we parametrize a matrix M_A in this new model with an RNN whose architecture follows the abstract syntax tree (similar to the way in which RNN architectures might take the form of a parse tree in an NLP setting (Socher et al., 2013))....
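The tree-shaped recursion in the quoted passage — a recursive network whose architecture follows the abstract syntax tree — can be sketched generically. This is not the paper's NPM-RNN parameterization; the single shared weight matrix, the dimension, and mean-pooling over children are simplifying assumptions for illustration:

```python
import ast
import numpy as np

rng = np.random.default_rng(0)
DIM = 8
# One embedding per AST node type plus one shared combination matrix
# (Socher-style tree RNNs often use richer, per-type parameters).
embed = {}
W = rng.standard_normal((DIM, 2 * DIM)) / np.sqrt(2 * DIM)

def node_vec(node_type):
    """Lazily create a random embedding for each AST node type."""
    if node_type not in embed:
        embed[node_type] = rng.standard_normal(DIM) / np.sqrt(DIM)
    return embed[node_type]

def encode(node):
    """Recursively encode an AST node: combine the node-type embedding
    with the mean of its children's vectors through a tanh layer, so the
    network's structure mirrors the syntax tree."""
    children = [encode(c) for c in ast.iter_child_nodes(node)]
    child_sum = np.mean(children, axis=0) if children else np.zeros(DIM)
    return np.tanh(W @ np.concatenate([node_vec(type(node).__name__), child_sum]))

v = encode(ast.parse("def f(x):\n    return x + 1"))
```

The resulting fixed-size vector is the kind of program embedding on which feedback-propagation models operate; training would adjust `W` and the embeddings against a downstream objective.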