Visually grounded and textual semantic models differentially decode brain activity associated with concrete and abstract nouns
Citations
Additional excerpts
...Anderson et al. (2017) recently corroborated this hypothesis using semantic representations and fMRI studies....
[...]
Cites background from "Visually grounded and textual seman..."
..., 2016), word similarity (Anderson et al., 2017), decoding brain activity (Glavas et al....
[...]
References
"Visually grounded and textual seman..." refers background in this paper
...In all tests (see Table 3) the individual-level accuracies were significantly different (lower) than the group-level accuracy (corrected for multiple comparisons using false discovery rate (Benjamini and Hochberg, 1995))....
[...]
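The Benjamini–Hochberg procedure cited in this excerpt controls the false discovery rate by comparing sorted p-values against increasing thresholds. A minimal pure-Python sketch of the procedure (illustrative only; this is not the authors' analysis code, and the example p-values are arbitrary):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg FDR control: return a boolean rejection mask.

    Rejects the hypotheses with the k smallest p-values, where k is the
    largest rank such that p_(k) <= (k / m) * alpha.
    """
    m = len(pvals)
    # Indices of p-values in ascending order.
    order = sorted(range(m), key=lambda i: pvals[i])
    # Find the largest rank whose p-value passes its BH threshold.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * alpha:
            k_max = rank
    # Reject everything at or below that rank.
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= k_max:
            reject[i] = True
    return reject

pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205]
print(benjamini_hochberg(pvals))
# -> [True, True, False, False, False, False, False, False]
```

With m = 8 and alpha = 0.05, only the two smallest p-values fall below their thresholds (0.00625 and 0.0125), so only those hypotheses are rejected.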
"Visually grounded and textual seman..." refers methods in this paper
...The image- and text-based computational models we use have recently been developed using neural networks (Mikolov et al., 2013; Jia et al., 2014)....
[...]
...Image representations are obtained by extracting the pre-softmax layer from a forward pass in a convolutional neural network (CNN) that has been trained on the ImageNet classification task using Caffe (Jia et al., 2014)....
[...]
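The excerpt describes taking the pre-softmax layer (the logits) of a trained CNN as the image representation, rather than the softmax class probabilities. The idea can be sketched with a toy two-layer network in pure Python (the weights and shapes here are arbitrary stand-ins, not a trained CNN or the Caffe pipeline the paper uses):

```python
import math

def forward(x, w_hidden, w_out):
    """Forward pass of a toy net; returns (pre_softmax, probs).

    The pre-softmax vector (the logits) is what one would keep as the
    image embedding; probs are the softmax class probabilities.
    """
    # Hidden layer with ReLU activation.
    hidden = [max(0.0, sum(xi * w for xi, w in zip(x, row)))
              for row in w_hidden]
    # Output layer: these logits are the "pre-softmax" representation.
    logits = [sum(h * w for h, w in zip(hidden, row)) for row in w_out]
    # Numerically stable softmax over the logits.
    peak = max(logits)
    exps = [math.exp(z - peak) for z in logits]
    total = sum(exps)
    probs = [e / total for e in exps]
    return logits, probs

x = [0.5, -1.2, 3.0]                                  # toy "image" input
w_hidden = [[0.1, 0.2, 0.3], [-0.4, 0.5, 0.6]]        # arbitrary weights
w_out = [[1.0, -1.0], [0.5, 0.5], [-0.5, 1.5]]        # 3 "classes"

pre_softmax, probs = forward(x, w_hidden, w_out)
print(len(pre_softmax), round(sum(probs), 6))
# -> 3 1.0
```

In the real setting the pre-softmax layer of an ImageNet-trained CNN is a high-dimensional feature vector; the softmax probabilities are discarded because the logits carry richer, less class-collapsed information.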
"Visually grounded and textual seman..." refers methods in this paper
...For linguistic input, we use the continuous vector representations from the skip-gram model of Mikolov et al. (2013)....
[...]
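The skip-gram model of Mikolov et al. (2013) learns word vectors by predicting the context words within a fixed window around each target word. A minimal sketch of how its (target, context) training pairs are generated (illustrative only; the paper uses pretrained skip-gram vectors, not this code):

```python
def skipgram_pairs(tokens, window=2):
    """Return (target, context) pairs as used in the skip-gram objective.

    For each position i, every other token within `window` positions of i
    becomes a context word for the target token at i.
    """
    pairs = []
    for i, target in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((target, tokens[j]))
    return pairs

sentence = "the cat sat on the mat".split()
print(skipgram_pairs(sentence, window=1))
```

Training then adjusts each word's vector so that it assigns high probability to its observed context words; words appearing in similar contexts therefore end up with similar vectors.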