Adhiguna Kuncoro

Researcher at Google

Publications - 28
Citations - 2027

Adhiguna Kuncoro is an academic researcher from Google. The author has contributed to research on topics including language modeling and parsing, has an h-index of 15, and has co-authored 25 publications receiving 1,751 citations. Previous affiliations of Adhiguna Kuncoro include Carnegie Mellon University and the University of Oxford.

Papers
Posted Content

DyNet: The Dynamic Neural Network Toolkit

TL;DR: DyNet is a toolkit for implementing neural network models based on dynamic declaration of network structure. It has an optimized C++ backend and a lightweight graph representation, and it is designed to let users implement their models in a way that is idiomatic in their preferred programming language.
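Dynamic declaration means the computation graph is rebuilt for every example by ordinary host-language control flow, which is what makes variably shaped inputs such as parse trees convenient. Below is a minimal sketch assuming DyNet's Python bindings; the toy XOR task, dimensions, and hyperparameters are illustrative choices, not taken from the paper.

```python
# A minimal sketch of dynamic graph declaration, assuming DyNet's Python
# bindings. The XOR setup is illustrative, not from the paper.
import dynet as dy

pc = dy.ParameterCollection()
W = pc.add_parameters((8, 2))   # hidden-layer weights
b = pc.add_parameters(8)        # hidden-layer bias
V = pc.add_parameters((1, 8))   # output weights
trainer = dy.SimpleSGDTrainer(pc)

data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
for epoch in range(200):
    for (x1, x2), y in data:
        dy.renew_cg()                      # a fresh graph for every example
        x = dy.inputVector([x1, x2])
        h = dy.tanh(W * x + b)             # graph is built by running code
        y_hat = dy.logistic(V * h)
        loss = dy.binary_log_loss(y_hat, dy.scalarInput(y))
        loss.forward()
        loss.backward()
        trainer.update()
```

Because the graph is declared anew per example, a different input shape (say, a tree with a different branching structure) simply produces a different graph on the next iteration, with no padding or static graph compilation required.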
Proceedings Article

Recurrent Neural Network Grammars

TL;DR: Paper presented at the 2016 Conference of the North American Chapter of the Association for Computational Linguistics, held in San Diego (CA, USA), June 12-17, 2016.
Proceedings Article

What Do Recurrent Neural Network Grammars Learn About Syntax?

TL;DR: By training grammars without nonterminal labels, the authors find that phrasal representations depend minimally on nonterminals, providing support for the endocentricity hypothesis.
Proceedings Article

LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better

TL;DR: The authors find that the mere presence of syntactic information does not improve accuracy, but that number agreement improves when the model architecture is determined by syntax: top-down construction outperforms left-corner and bottom-up variants in capturing non-local structural dependencies.
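The number-agreement evaluation this line of work builds on (Linzen et al., 2016) can be sketched in a few lines. Here, `model_logprob` is a hypothetical stand-in for whatever scoring function a language model exposes; the paper's own models and data are not reproduced.

```python
# A hedged sketch of the number-agreement evaluation protocol (after
# Linzen et al., 2016). "model_logprob" is a hypothetical scoring
# function, not the paper's code.
def agreement_accuracy(model_logprob, examples):
    """examples: (prefix, correct_verb, wrong_verb) triples, e.g.
    ("The keys to the cabinet", "are", "is")."""
    correct = 0
    for prefix, good, bad in examples:
        # The model passes an item if it scores the correctly inflected
        # verb higher than the incorrectly inflected one.
        if model_logprob(prefix, good) > model_logprob(prefix, bad):
            correct += 1
    return correct / len(examples)
```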
Posted Content

Recurrent Neural Network Grammars

TL;DR: The authors introduce recurrent neural network grammars, probabilistic models of sentences with explicit phrase structure, which allow application to both parsing and language modeling, and demonstrate that they provide better parsing in English than any single previously published supervised generative model and better language modeling than state-of-the-art sequential RNNs in English and Chinese.
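An RNNG generates a sentence and its phrase structure jointly through a sequence of transitions: NT(X) opens a nonterminal, GEN(w) generates a word, and REDUCE closes the most recent open constituent. The toy executor below reconstructs only the bracketing implied by such an action sequence; the stack-LSTM model that scores each action in the paper is omitted, and the helper is illustrative rather than the authors' code.

```python
# A toy executor for the generative RNNG transition semantics: NT opens a
# nonterminal, GEN emits a word, REDUCE closes the newest open constituent.
# Illustrative only; the probabilistic scoring model is omitted.
def execute(actions):
    stack = []
    for act in actions:
        if act[0] == "NT":                    # e.g. ("NT", "NP")
            stack.append(["(", act[1]])       # marker for an open constituent
        elif act[0] == "GEN":                 # e.g. ("GEN", "cat")
            stack.append(act[1])
        elif act[0] == "REDUCE":
            children = []
            while not (isinstance(stack[-1], list) and stack[-1][0] == "("):
                children.append(stack.pop())
            label = stack.pop()[1]
            stack.append("(%s %s)" % (label, " ".join(reversed(children))))
    return stack[0]

print(execute([("NT", "S"), ("NT", "NP"), ("GEN", "the"), ("GEN", "cat"),
               ("REDUCE",), ("NT", "VP"), ("GEN", "meows"), ("REDUCE",),
               ("REDUCE",)]))
# -> (S (NP the cat) (VP meows))
```

Because every word is generated by a GEN action, marginalizing over action sequences yields a language model, while conditioning on the observed words yields a parser, which is how the same model serves both tasks.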