SAEA: Self-Attentive Heterogeneous Sequence Learning Model for Entity Alignment
References
"SAEA: Self-Attentive Heterogeneous ..." refers methods in this paper
...To deal with the difficulty of having too many output vectors to update in every epoch, we use negative sampling [11] to update only a sample of them...
[...]
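The excerpt above points to the skip-gram negative-sampling objective of [11]: rather than updating every output vector through a full softmax, each training step updates the observed target plus a few sampled negatives. A minimal PyTorch sketch of that objective (tensor names and the uniform negative sampler are illustrative assumptions, not this paper's code):

```python
import torch
import torch.nn.functional as F

def negative_sampling_loss(center_vec, pos_out_vec, all_out_vecs, k=5):
    """Update the positive output vector and only k sampled negatives,
    instead of the full (V, d) output matrix.
    center_vec:   (d,) embedding of the input item
    pos_out_vec:  (d,) output vector of the observed target
    all_out_vecs: (V, d) output matrix to draw negatives from
    """
    # Positive term: push the observed pair's score up.
    loss = -F.logsigmoid(torch.dot(center_vec, pos_out_vec))

    # Negative term: k uniform samples (for brevity; [11] samples from
    # a unigram^(3/4) distribution), whose scores are pushed down.
    neg_idx = torch.randint(0, all_out_vecs.size(0), (k,))
    neg_scores = all_out_vecs[neg_idx] @ center_vec        # (k,)
    return loss - F.logsigmoid(-neg_scores).sum()
```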
...Inspired by [20], we adopt layer normalization and a dropout strategy in the self-attention layer and the feed-forward layer...
[...]
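Following [20], each sublayer's output is passed through dropout, added back to its input via a residual connection, and then layer-normalized. A sketch of that wrapper (model width and dropout rate are illustrative defaults, not values from the paper):

```python
import torch.nn as nn

class SublayerWrapper(nn.Module):
    """Post-norm arrangement from [20]: LayerNorm(x + Dropout(sublayer(x))),
    applied around both the self-attention and the feed-forward layer."""
    def __init__(self, d_model=128, p_drop=0.1):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.drop = nn.Dropout(p_drop)

    def forward(self, x, sublayer):
        # Dropout regularizes the sublayer output before the residual
        # addition; layer normalization stabilizes the combined result.
        return self.norm(x + self.drop(sublayer(x)))
```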
...Furthermore, unlike RNN-based models, which assume that the next element in a relational path depends on the current input and hidden state (an assumption that is inappropriate for paths in KGs), we adapt the original residual connection in the Transformer into a special crossed residual module...
[...]
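The contrast drawn in the excerpt is between an RNN, which reads a relational path strictly left to right with each step conditioned on the previous hidden state, and self-attention, where every path element sees the whole path in one step. The sketch below shows the two baselines being compared (dimensions are illustrative; the residual shown is the original Transformer form [20], which SAEA adapts into its crossed residual module, whose exact wiring the excerpt does not detail):

```python
import torch.nn as nn

d = 64
rnn = nn.GRU(input_size=d, hidden_size=d, batch_first=True)
attn = nn.MultiheadAttention(embed_dim=d, num_heads=4, batch_first=True)

def rnn_encode(path):             # path: (batch, steps, d)
    # RNN assumption: step t depends on input t and hidden state t-1,
    # a strict sequential dependency along the relational path.
    out, _ = rnn(path)
    return out

def attention_encode(path):       # path: (batch, steps, d)
    # Self-attention: every element attends to the full path at once;
    # the residual below is the standard Transformer connection that
    # the paper's crossed residual module replaces.
    ctx, _ = attn(path, path, path)
    return path + ctx
```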
...Therefore, motivated by the Transformer [20], a purely self-attention-based sequence model that achieves state-of-the-art performance and efficiency, we seek to build a sequential alignment model upon it...
[...]
...Towards this end, inspired by the new sequential model Transformer [20], which has achieved better performance than traditional recurrent models on machine translation tasks but has not been explored for entity alignment in KGs, we propose a brand-new Self-Attentive heterogeneous sequence learning model for Entity Alignment (SAEA)...
[...]
"SAEA: Self-Attentive Heterogeneous ..." refers background in this paper
...TransE [3] is the most popular model in the KG embedding area; it requires each relation triple (h, r, t) to satisfy h + r ≈ t in vector space...
[...]
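The requirement h + r ≈ t is usually turned into a distance-based score, ||h + r - t||, trained so that observed triples score lower than corrupted ones. A minimal sketch (the L2 norm and margin of 1.0 are common defaults, not values taken from this paper):

```python
import torch

def transe_score(h, r, t, p=2):
    """TransE [3]: a triple (h, r, t) is plausible when h + r lands
    near t, so the score is the p-norm distance ||h + r - t||."""
    return torch.norm(h + r - t, p=p, dim=-1)

def margin_ranking_loss(pos_score, neg_score, margin=1.0):
    # Standard TransE objective: positive triples should score lower
    # than corrupted (negative) triples by at least the margin.
    return torch.clamp(margin + pos_score - neg_score, min=0).mean()
```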