
Wang Ling

Researcher at Google

Publications: 77
Citations: 5,814

Wang Ling is an academic researcher at Google whose work focuses on machine translation and language modeling. He has an h-index of 27 and has co-authored 67 publications receiving 5,225 citations. His previous affiliations include INESC-ID and Carnegie Mellon University.

Papers
Proceedings Article (DOI)

Transition-Based Dependency Parsing with Stack Long Short-Term Memory

TL;DR: This work proposes stack LSTMs to learn continuous representations of the parser's state in transition-based dependency parsing, covering the buffer of incoming words, the history of parser actions, and the stack of partially built tree fragments.
Posted Content

Transition-Based Dependency Parsing with Stack Long Short-Term Memory

TL;DR: The authors propose stack LSTMs to learn representations of parser states in transition-based dependency parsers, capturing the buffer of incoming words, the history of actions taken by the parser, and the complete contents of the stack of partially built tree fragments.
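A minimal sketch of the stack LSTM idea described above, assuming PyTorch; the class and method names (StackLSTM, push, pop, summary) are illustrative, not the authors' implementation. The key property is that popping restores the previous LSTM state, so the top hidden state always summarizes the current stack contents.

```python
import torch
import torch.nn as nn

class StackLSTM(nn.Module):
    """Illustrative sketch (not the authors' code): an LSTM whose
    states are kept on a stack so that pop() reverts the summary
    to the state before the last push()."""

    def __init__(self, input_dim, hidden_dim):
        super().__init__()
        self.cell = nn.LSTMCell(input_dim, hidden_dim)
        # Stack of (h, c) pairs; index 0 represents the empty stack.
        self.states = [(torch.zeros(1, hidden_dim),
                        torch.zeros(1, hidden_dim))]

    def push(self, x):
        # Advance the LSTM from the state currently on top of the stack.
        h, c = self.cell(x, self.states[-1])
        self.states.append((h, c))

    def pop(self):
        # Drop the top state; the summary reverts to the previous one.
        self.states.pop()

    def summary(self):
        # The hidden state on top summarizes the entire stack contents.
        return self.states[-1][0]
```

In a parser, pushing embeddings of partially built tree fragments and popping when they are reduced lets the model condition on its full state, as the TL;DR above describes.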
Proceedings Article (DOI)

Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation

TL;DR: A model for constructing vector representations of words by composing characters with bidirectional LSTMs; it requires only a single vector per character type and a fixed set of parameters for the compositional model, yet yields state-of-the-art results in language modeling and part-of-speech tagging.
Posted Content

Finding Function in Form: Compositional Character Models for Open Vocabulary Word Representation

Abstract: We introduce a model for constructing vector representations of words by composing characters using bidirectional LSTMs. Relative to traditional word representation models that have independent vectors for each word type, our model requires only a single vector per character type and a fixed set of parameters for the compositional model. Despite the compactness of this model and, more importantly, the arbitrary nature of the form-function relationship in language, our "composed" word representations yield state-of-the-art results in language modeling and part-of-speech tagging. Benefits over traditional baselines are particularly pronounced in morphologically rich languages (e.g., Turkish).
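As a rough illustration of the compositional character model described in the abstract, here is a hedged PyTorch sketch; the class name (CharToWord), parameter names, and dimension choices are assumptions, not the paper's code. Combining the final forward and backward states through learned linear maps stands in for the paper's composition of the two directions.

```python
import torch
import torch.nn as nn

class CharToWord(nn.Module):
    """Illustrative sketch: build a word vector from its characters
    with a bidirectional LSTM, using only per-character parameters."""

    def __init__(self, n_chars, char_dim=16, hidden_dim=32, word_dim=64):
        super().__init__()
        self.char_emb = nn.Embedding(n_chars, char_dim)
        self.bilstm = nn.LSTM(char_dim, hidden_dim,
                              bidirectional=True, batch_first=True)
        # Learned combination of the final forward and backward states.
        self.proj_f = nn.Linear(hidden_dim, word_dim, bias=False)
        self.proj_b = nn.Linear(hidden_dim, word_dim, bias=True)

    def forward(self, char_ids):
        # char_ids: (1, word_length) tensor of character indices.
        chars = self.char_emb(char_ids)
        _, (h_n, _) = self.bilstm(chars)
        # h_n[0] / h_n[1]: final forward / backward hidden states.
        return self.proj_f(h_n[0]) + self.proj_b(h_n[1])  # (1, word_dim)
```

Because the parameter count grows with the character inventory rather than the vocabulary, unseen words still receive representations, which is consistent with the abstract's emphasis on gains in morphologically rich languages such as Turkish.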
Proceedings Article (DOI)

Two/Too Simple Adaptations of Word2Vec for Syntax Problems

TL;DR: Two simple modifications to the models in the popular Word2Vec tool are presented, in order to generate embeddings more suited to tasks involving syntax.
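One of the paper's two modifications is the structured skip-gram model, which makes the output (context) embeddings position-dependent so that word order is reflected in the learned vectors. Below is a minimal, hypothetical PyTorch sketch of that idea; the class and parameter names are mine, not the authors'.

```python
import torch
import torch.nn as nn

class StructuredSkipGram(nn.Module):
    """Illustrative sketch: skip-gram with one output embedding table
    per relative context position, so order information is preserved."""

    def __init__(self, vocab_size, dim=100, window=5):
        super().__init__()
        self.in_emb = nn.Embedding(vocab_size, dim)
        # One table per offset in -window..-1, +1..+window.
        self.out_emb = nn.ModuleList(
            [nn.Embedding(vocab_size, dim) for _ in range(2 * window)]
        )

    def score(self, center, context, offset_index):
        # offset_index in [0, 2*window) encodes the relative position.
        v = self.in_emb(center)
        u = self.out_emb[offset_index](context)
        return (v * u).sum(-1)  # logit for a negative-sampling loss
```

Training would proceed as in ordinary skip-gram with negative sampling, except that each (center, context) pair is scored with the table matching its relative offset.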