scispace - formally typeset

Siva Reddy

Researcher at University of Cambridge

Publications - 82
Citations - 4168

Siva Reddy is an academic researcher from the University of Cambridge. The author has contributed to research in topics including parsing and natural language. The author has an h-index of 25 and has co-authored 82 publications receiving 3,321 citations. Previous affiliations of Siva Reddy include McGill University and the International Institute of Information Technology, Hyderabad.

Papers
Posted Content

Modelling Latent Translations for Cross-Lingual Transfer

TL;DR: This paper proposes a technique that integrates both steps of the traditional cross-lingual pipeline (translation and classification) into a single model by treating the intermediate translations as a latent random variable. The model can then be fine-tuned with a variant of minimum risk training in which the reward is the accuracy of the downstream task classifier.
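The minimum risk training objective mentioned above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: `translation_candidates` and `classifier_reward` are hypothetical stand-ins for an NMT sampler and a downstream classifier.

```python
import math

def translation_candidates(src):
    # A real system would sample translations from an NMT model together with
    # their log-probabilities; here we hard-code candidates for one sentence.
    return [("this movie is great", -0.5), ("this film is good", -0.9)]

def classifier_reward(translation, gold_label):
    # The reward is downstream task accuracy: 1 if classified correctly, else 0.
    predicted = "pos" if ("great" in translation or "good" in translation) else "neg"
    return 1.0 if predicted == gold_label else 0.0

def minimum_risk_loss(src, gold_label):
    # Minimum risk training: expected negative reward under the (renormalized)
    # model distribution over the sampled latent translations.
    cands = translation_candidates(src)
    z = sum(math.exp(lp) for _, lp in cands)
    return -sum(math.exp(lp) / z * classifier_reward(t, gold_label)
                for t, lp in cands)
```

Because the loss is an expectation over samples, it stays differentiable with respect to the translation model's probabilities, which is what makes end-to-end fine-tuning possible.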
Posted Content

Learning an Executable Neural Semantic Parser

TL;DR: This paper generates tree-structured logical forms with a transition-based approach that combines a generic tree-generation algorithm with domain-general operations defined by the logical language.
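A minimal sketch of transition-based tree generation, assuming a simplified action set (the paper's actual transition system and logical language differ): `NT` opens a labeled subtree, `TERM` emits a leaf, and `REDUCE` closes the current subtree.

```python
def execute_transitions(actions):
    # Build a tree-structured logical form from a sequence of transitions:
    #   ("NT", label)  opens a subtree with the given label,
    #   ("TERM", tok)  attaches a leaf to the open subtree,
    #   ("REDUCE",)    closes the most recently opened subtree.
    stack = [("ROOT", [])]
    for action in actions:
        if action[0] == "NT":
            node = (action[1], [])
            stack[-1][1].append(node)  # attach to current subtree
            stack.append(node)         # and make it the open subtree
        elif action[0] == "TERM":
            stack[-1][1].append(action[1])
        elif action[0] == "REDUCE":
            stack.pop()
    return stack[0][1][0]
```

For example, the action sequence NT(count), NT(capital), TERM(us), REDUCE, REDUCE yields the nested form count(capital(us)). A parser trained this way predicts one action per step, so well-formedness of the output tree is guaranteed by construction.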
Posted Content

Explicitly Modeling Syntax in Language Model improves Generalization

TL;DR: This paper introduces a new syntax-aware language model, Syntactic Ordered Memory (SOM), which explicitly models structure with a one-step look-ahead parser while maintaining the conditional probability setting of a standard language model.
Posted Content

Back-Training excels Self-Training at Unsupervised Domain Adaptation of Question Generation and Passage Retrieval

TL;DR: This paper proposes a new domain adaptation method called "back-training," which outperforms self-training by a large margin: 9.3 BLEU-1 points on question generation and 7.9 accuracy points on top-1 passage retrieval.
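The contrast between the two adaptation schemes can be sketched as follows. This is a schematic, assuming generic `model` and `inverse_model` callables (hypothetical stand-ins for, e.g., a question generator and a passage-to-question model): self-training pairs natural inputs with model-predicted (noisy) outputs, while back-training pairs model-generated (noisy) inputs with natural target-domain outputs, keeping the supervision targets clean.

```python
def self_training_pairs(model, target_inputs):
    # Self-training: natural inputs, synthetic (possibly noisy) outputs.
    return [(x, model(x)) for x in target_inputs]

def back_training_pairs(inverse_model, target_outputs):
    # Back-training: synthetic (possibly noisy) inputs, natural outputs.
    # Noise ends up on the input side, where models are more robust to it.
    return [(inverse_model(y), y) for y in target_outputs]
```

The design point is where the noise lands: in back-training the gold side of every training pair is real target-domain data, which is why it adapts better than self-training in the paper's experiments.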
Posted Content

Evaluating the Faithfulness of Importance Measures in NLP by Recursively Masking Allegedly Important Tokens and Retraining

TL;DR: The authors adapt and improve ROAR (RemOve And Retrain), a recently proposed faithfulness benchmark from computer vision, by recursively removing dataset redundancies, which otherwise interfere with ROAR.
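The recursive masking-and-retraining loop can be sketched as below. This is a minimal illustration, not the paper's code: `train_and_score` and `importance` are hypothetical callables standing in for model retraining/evaluation and an importance measure, and masking replaces tokens with a `[MASK]` placeholder.

```python
def recursive_roar(train_and_score, importance, dataset, steps, k=1):
    # Recursive ROAR sketch: in each round, mask the k allegedly most
    # important tokens per example, retrain, and record the score.
    # Recomputing importance on the already-masked data each round is what
    # handles dataset redundancies (a backup token cannot hide the drop).
    scores = [train_and_score(dataset)]
    for _ in range(steps):
        masked = []
        for tokens in dataset:
            ranked = importance(tokens)      # indices, most important first
            to_mask = set(ranked[:k])
            masked.append(["[MASK]" if i in to_mask else t
                           for i, t in enumerate(tokens)])
        dataset = masked
        scores.append(train_and_score(dataset))  # retrain on masked data
    return scores
```

A faithful importance measure should produce a steep drop in the recorded scores; a measure that ranks unimportant tokens highly will barely move them.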