Understanding Hidden Memories of Recurrent Neural Networks
Citations
501 citations
Cites background from "Understanding Hidden Memories of Re..."
...The visualization of RNNs is more complicated than that of CNNs, as an input may arouse a response from every neuron, and a hidden-state neuron may be highly responsive to a set of words, forming many-to-many relationships [Ming et al., 2017]....
[...]
...Visualization Tools for RNNs The visualization of RNNs is more complicated than that of CNNs, as an input may arouse a response from every neuron, and a hidden neuron may be highly responsive to a number of sequences, forming many-to-many relationships (Ming et al., 2017)....
[...]
...Considering these complications, Ming et al. (2017) designed a more advanced visualization system composed of three parts: (1) They calculated expected memory neuron responses to each input word, which is the average of responses to all occurrences of that word from a training database; (2) They...
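The expected-response computation described in this excerpt can be sketched as follows. This is a minimal illustration, not the authors' implementation: the toy corpus, the function name `expected_responses`, and the precomputed per-step hidden states are all invented for the example.

```python
import numpy as np
from collections import defaultdict

def expected_responses(words, responses):
    """Average the hidden-state response vectors over all
    occurrences of each word in the corpus.

    words:     list of tokens, one per time step
    responses: array of shape (len(words), hidden_size) holding
               the RNN's hidden-state response at each step
    """
    buckets = defaultdict(list)
    for word, h in zip(words, responses):
        buckets[word].append(h)
    # Expected response of a word = mean over its occurrences
    return {w: np.mean(hs, axis=0) for w, hs in buckets.items()}

# Toy example: 2-d hidden states for a 4-token corpus
words = ["the", "cat", "the", "dog"]
responses = np.array([[1.0, 0.0],
                      [0.5, 0.5],
                      [3.0, 2.0],
                      [0.0, 1.0]])
exp = expected_responses(words, responses)
print(exp["the"])  # mean of [1, 0] and [3, 2] -> [2. 1.]
```

Averaging over occurrences gives each word a single representative response vector, which is what makes the word-to-neuron relationships comparable across the vocabulary.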
[...]
483 citations
Cites background or methods from "Understanding Hidden Memories of Re..."
...Similarly to comparing model architectures, some systems rely solely on data visualization representations and encodings to compare models [43], while others compare different snapshots of a single model as it trains over time, i....
[...]
...While at first this does not seem like a major differentiation from before, instance groups provide some unique advantages [39], [43]....
[...]
...It is encouraging to see that many of the visual analytics systems recognize this importance and report on design studies conducted with AI experts before building a tool, in order to understand the users and their needs [14], [15], [39], [43], [52]....
[...]
...Another system, RNNVis [43], visualizes and compares different RNN models for various natural language processing tasks....
[...]
442 citations
296 citations
291 citations
Cites methods from "Understanding Hidden Memories of Re..."
...2017 [143] ◆ ◆ ◆...
[...]
...In the case of recurrent neural networks (RNN), LSTMVis [193] and RNNVis [143] are tools to interpret RNN models for natural language processing tasks....
[...]
References
72,897 citations
[...]
38,208 citations
20,027 citations
"Understanding Hidden Memories of Re..." refers background in this paper
...[2] applied attention in machine translation and showed the relationship between source and target sentences....
[...]
14,077 citations
12,783 citations
"Understanding Hidden Memories of Re..." refers background or methods in this paper
..., the state-of-the-art performance on the ImageNet benchmark in 2013 proposed by Zeiler & Fergus [52])....
[...]
...These studies (Zeiler & Fergus [52], Dosovitskiy & Brox [10]) provided researchers with insights into neurons’ learned features and inspired the design of better network architectures (e.g., the architecture proposed by Zeiler & Fergus [52], which achieved state-of-the-art performance on the ImageNet benchmark in 2013)....
[...]
...In CNNs, the features learned by neurons can be visually explained using images derived by activation maximization [12] or code inversion [52], since the input space of a CNN is continuous....
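The activation-maximization idea mentioned in this excerpt can be illustrated on a toy linear layer: perform gradient ascent on the input to maximize one neuron's activation, with a small L2 decay to keep the input bounded. This is a minimal sketch of the general technique, not the method of [12]; the weight matrix and function name are invented for the example.

```python
import numpy as np

def activation_maximization(W, neuron, steps=200, lr=0.1, decay=0.01):
    """Gradient ascent on the input x to maximize the linear
    pre-activation W[neuron] @ x, with L2 decay on x."""
    rng = np.random.default_rng(0)
    x = rng.normal(scale=0.1, size=W.shape[1])
    for _ in range(steps):
        grad = W[neuron]              # d(W[neuron] @ x) / dx
        x += lr * (grad - decay * x)  # ascent step with decay
    return x

# Toy "network": 3 linear neurons over a 4-d input space
W = np.array([[1.0, -1.0, 0.0, 0.5],
              [0.0,  2.0, -1.0, 0.0],
              [0.5,  0.5,  0.5, 0.5]])
x_star = activation_maximization(W, neuron=1)
# The optimized input aligns with neuron 1's weight vector,
# so its largest component is dimension 1 (weight 2.0)
print(int(np.argmax(np.abs(x_star))))  # -> 1
```

For a linear neuron the fixed point is simply a scaled copy of the neuron's weight vector; for a real CNN the same ascent loop is run through backpropagation, which is why the technique only works when the input space is continuous, as the snippet notes.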
[...]