Refining Word Embeddings for Sentiment Analysis
Citations
Cites background from "Refining Word Embeddings for Sentiment Analysis"
...The ability to generate similar sentences to unseen real data is considered a measurement of quality (Yu et al., 2017)....
...[146] Tree-LSTM with Refined Word Embeddings 54....
...(Yu et al., 2017) proposed to bypass this problem by modeling the generator as a stochastic policy....
...[146] proposed to refine pre-trained word embeddings with a sentiment lexicon, observing improved results based on [105]....
Cites background from "Refining Word Embeddings for Sentiment Analysis"
...It contains 13,915 words, each associated with a real-valued score in [1, 9] for the dimensions of valence, arousal and dominance....
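The E-ANEW lexicon described in this excerpt is essentially a word-to-score table. A minimal sketch of how such a lexicon might be represented and its [1, 9] valence scores rescaled to a symmetric polarity range (the entry values below are hypothetical, not the real lexicon):

```python
# Hypothetical E-ANEW-style entries: each word maps to real-valued
# valence, arousal and dominance scores on a [1, 9] scale.
lexicon = {
    "happy":    {"valence": 8.5, "arousal": 6.0, "dominance": 7.2},
    "terrible": {"valence": 1.9, "arousal": 6.3, "dominance": 3.6},
}

def to_polarity(valence, lo=1.0, hi=9.0):
    """Rescale a [1, 9] valence score to a symmetric [-1, 1] polarity score."""
    return 2.0 * (valence - lo) / (hi - lo) - 1.0
```

For example, `to_polarity(lexicon["happy"]["valence"])` gives 0.875, while scores below the scale midpoint of 5 map to negative polarity.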
Cites methods from "Refining Word Embeddings for Sentiment Analysis"
...Similarly, we use the technique of Yu et al. (2017) for refining GloVe embeddings for sentiment, and evaluate model performance on the SST task....
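The refinement technique referenced in this excerpt adjusts pre-trained vectors so that sentimentally similar words sit closer together. A minimal sketch of that general idea — nudging each vector toward its nearest neighbours, weighted by how close their lexicon valence scores are — using toy vectors and hypothetical valence scores rather than real GloVe data, and not the authors' exact objective:

```python
import numpy as np

# Toy embeddings: "bad" is semantically close to "good"/"great" but has
# opposite sentiment. Vectors and valence scores are hypothetical.
emb = {
    "good":  np.array([0.9, 0.1]),
    "great": np.array([0.8, 0.2]),
    "bad":   np.array([0.7, 0.3]),
}
valence = {"good": 7.9, "great": 8.2, "bad": 2.8}  # [1, 9] scale

def refine(emb, valence, k=2, alpha=0.1, iters=10):
    words = list(emb)
    vecs = {w: emb[w].astype(float) for w in words}
    for _ in range(iters):
        for w in words:
            # top-k cosine neighbours of w (excluding w itself)
            neighbours = sorted(
                (u for u in words if u != w),
                key=lambda u: -np.dot(vecs[w], vecs[u])
                / (np.linalg.norm(vecs[w]) * np.linalg.norm(vecs[u])),
            )[:k]
            # weight each neighbour by sentiment similarity on the [1, 9] scale
            weights = [1.0 - abs(valence[w] - valence[u]) / 8.0 for u in neighbours]
            target = sum(wt * vecs[u] for wt, u in zip(weights, neighbours))
            target /= max(sum(weights), 1e-8)
            # pull the vector a small step toward its sentiment-weighted neighbours
            vecs[w] = (1 - alpha) * vecs[w] + alpha * target
    return vecs

refined = refine(emb, valence)
```

After refinement, "good" ends up closer (in cosine terms) to the sentimentally similar "great" than to "bad", which is the intended effect of lexicon-guided refinement.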
References
...Examples include C&W (Collobert and Weston, 2008; Collobert et al., 2011), Word2vec (Mikolov et al., 2013a; 2013b) and GloVe (Pennington et al., 2014)....
...Examples include C&W (Collobert and Weston, 2008; Collobert et al., 2011), Word2vec (Mikolov et al., 2013a; 2013b) and GloVe (Pennington et al., 2014)....
"Refining Word Embeddings for Sentim..." refers background in this paper
...In addition to the contextual information, character-level subwords (Bojanowski et al., 2016) and semantic knowledge resources (Faruqui et al., 2015; Kiela et al., 2015) such as WordNet (Miller, 1995) are also useful information for learning word embeddings....
...In addition to the E-ANEW, other lexicons such as SentiWordNet (Esuli and Sebastiani, 2006), SoCal (Taboada et al., 2011), SentiStrength (Thelwall et al., 2012), Vader (Hutto et al., 2014), ANTUSD (Wang and Ku, 2016) and SCL-NMA (Kiritchenko and Mohammad, 2016) also provide real-valued sentiment intensity or strength scores like the valence scores....
"Refining Word Embeddings for Sentim..." refers background or methods in this paper
...The above word embeddings were used by CNN (Kim, 2014), DAN (Iyyer et al., 2015), bi-directional LSTM (Bi-LSTM) (Tai et al., 2015) and Tree-LSTM (Looks et al., 2017) with default parameter values....
...To this end, several deep neural network classifiers that performed well on the Stanford Sentiment Treebank (SST) (Socher et al., 2013) are selected, including convolutional neural networks (CNN) (Kim, 2014), deep averaging network (DAN) (Iyyer et al., 2015) and long short-term memory (LSTM) (Tai et al., 2015; Looks et al., 2017)....
...For the pre-trained word embeddings, GloVe outperformed Word2vec for DAN, Bi-LSTM and Tree-LSTM, whereas Word2vec yielded better performance for CNN....
...Table 2 shows the average noise@10 for different word embeddings....

Footnote URLs embedded in the excerpt:
3 http://ir.hit.edu.cn/~dytang/
4 https://github.com/yoonkim/CNN_sentence
5 https://github.com/miyyer/dan
6 https://github.com/stanfordnlp/treelstm
7 https://github.com/tensorflow/fold