Conditional Random Field Autoencoders for Unsupervised Structured Prediction
Citations
...Likewise, given a monolingual corpus of source language S = {x^(s)}, s = 1..S, it is natural to introduce a source autoencoder that aims at reconstructing... [footnote 1: Our definition of autoencoders is inspired by Ammar et al. (2014).]
[...]
...Autoencoders and their variants have been widely used in unsupervised deep learning (Vincent et al., 2010; Socher et al., 2011; Ammar et al., 2014, to name a few)....
[...]
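As background for the excerpts above, the "standard autoencoder" that the cited sequential variants build on is a model trained to encode an input into a lower-dimensional code and reconstruct the input from that code. The following is a minimal NumPy sketch of that idea under squared reconstruction loss; it is an illustration, not code from this paper or any of the citing papers, and all variable names are hypothetical:

```python
import numpy as np

# Minimal single-hidden-layer autoencoder: encode x to a low-dimensional
# code h, then decode h to a reconstruction x_hat, trained to minimize
# the mean squared reconstruction error ||x - x_hat||^2.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 8))                # toy data: 100 samples, 8 features

W_enc = rng.normal(scale=0.1, size=(8, 3))   # encoder weights (8 -> 3)
W_dec = rng.normal(scale=0.1, size=(3, 8))   # decoder weights (3 -> 8)

def reconstruct(X, W_enc, W_dec):
    H = np.tanh(X @ W_enc)                   # nonlinear code
    return H @ W_dec                         # linear reconstruction

def loss(X, W_enc, W_dec):
    return np.mean((X - reconstruct(X, W_enc, W_dec)) ** 2)

lr = 0.05
initial = loss(X, W_enc, W_dec)
for _ in range(200):                         # plain batch gradient descent
    H = np.tanh(X @ W_enc)
    X_hat = H @ W_dec
    err = X_hat - X                          # d(loss)/d(X_hat), up to a constant
    grad_dec = H.T @ err / len(X)
    grad_h = err @ W_dec.T * (1 - H ** 2)    # backpropagate through tanh
    grad_enc = X.T @ grad_h / len(X)
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = loss(X, W_enc, W_dec)
```

The "sequential variants" discussed in the excerpts replace this unstructured code with a structured hidden layer (e.g. a label sequence scored by a CRF), but the encode-then-reconstruct objective is the same.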
...Though apparently similar to recent deep structured models such as neural-CRFs (Durrett and Klein, 2015; Ammar et al., 2014; Do et al., 2010), ours is different since we parsimoniously extract features that are necessary for precise and efficient knowledge expression, as opposed to neural-CRFs that learn as rich representations as possible for final prediction....
[...]
...similar in that they are both sequential variants of the standard autoencoder [23]....
[...]
References
...Conditional random fields [24] are used to model structure in numerous problem domains, including natural language processing (NLP), computational biology, and computer vision....
[...]
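The CRF machinery referenced in the excerpt above reduces, at its core, to computing a partition function Z over all structured outputs so that scores can be normalized into probabilities. A minimal linear-chain forward algorithm makes this concrete; this is an illustrative NumPy sketch under standard linear-chain assumptions, not the paper's implementation:

```python
import numpy as np
from itertools import product

# Linear-chain CRF over a length-T sequence with K labels: the score of a
# labeling y is sum_t emit[t, y_t] + sum_{t>0} trans[y_{t-1}, y_t], and
# p(y | x) = exp(score(y)) / Z.  The forward algorithm computes log Z in
# O(T * K^2) time instead of enumerating all K^T labelings.
def log_partition(emit, trans):
    T, K = emit.shape
    alpha = emit[0].copy()                   # log-scores of length-1 prefixes
    for t in range(1, T):
        # log-sum-exp over the previous label for each current label
        scores = alpha[:, None] + trans + emit[t][None, :]
        m = scores.max(axis=0)
        alpha = m + np.log(np.exp(scores - m).sum(axis=0))
    m = alpha.max()
    return m + np.log(np.exp(alpha - m).sum())

def brute_force_log_partition(emit, trans):
    # Reference implementation: enumerate every one of the K^T labelings.
    T, K = emit.shape
    scores = []
    for y in product(range(K), repeat=T):
        s = sum(emit[t, y[t]] for t in range(T))
        s += sum(trans[y[t - 1], y[t]] for t in range(1, T))
        scores.append(s)
    scores = np.array(scores)
    m = scores.max()
    return m + np.log(np.exp(scores - m).sum())
```

The same dynamic program underlies the tractable reconstruction term in CRF-autoencoder-style models: exact marginalization over the structured hidden layer is what keeps training feasible.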