Sanjay Subramanian
Researcher at Allen Institute for Artificial Intelligence
Publications: 20
Citations: 708
Sanjay Subramanian is an academic researcher at the Allen Institute for Artificial Intelligence. He has contributed to research on topics including coreference and the principle of compositionality. He has an h-index of 7, having co-authored 18 publications that have received 398 citations. His previous affiliations include Tel Aviv University.
Papers
Proceedings ArticleDOI
Evaluating Models’ Local Decision Boundaries via Contrast Sets
Matt Gardner,Yoav Artzi,Victoria Basmov,Jonathan Berant,Ben Bogin,Sihao Chen,Pradeep Dasigi,Dheeru Dua,Yanai Elazar,Ananth Gottumukkala,Nitish Gupta,Hannaneh Hajishirzi,Gabriel Ilharco,Daniel Khashabi,Kevin Lin,Jiangming Liu,Nelson F. Liu,Phoebe Mulcaire,Qiang Ning,Sameer Singh,Noah A. Smith,Sanjay Subramanian,Reut Tsarfaty,Eric Wallace,Ally Zhang,Ben Zhou +25 more
TL;DR: Proposes a more rigorous annotation paradigm for NLP that helps close systematic gaps in test data, recommending that dataset authors manually perturb test instances in small but meaningful ways that (typically) change the gold label, creating contrast sets.
Proceedings ArticleDOI
AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models
TL;DR: Introduces AllenNLP Interpret, a flexible framework for interpreting NLP models, which provides interpretation primitives for any AllenNLP model and task, a suite of built-in interpretation methods, and a library of front-end visualization components.
Posted Content
Evaluating NLP Models via Contrast Sets
Matt Gardner,Yoav Artzi,Victoria Basmova,Jonathan Berant,Ben Bogin,Sihao Chen,Pradeep Dasigi,Dheeru Dua,Yanai Elazar,Ananth Gottumukkala,Nitish Gupta,Hanna Hajishirzi,Gabriel Ilharco,Daniel Khashabi,Kevin Lin,Jiangming Liu,Nelson F. Liu,Phoebe Mulcaire,Qiang Ning,Sameer Singh,Noah A. Smith,Sanjay Subramanian,Reut Tsarfaty,Eric Wallace,Ally Zhang,Ben Zhou +25 more
TL;DR: Proposes a new annotation paradigm for NLP that helps close systematic gaps in test data, recommending that after a dataset is constructed, its authors manually perturb the test instances in small but meaningful ways that change the gold label, creating contrast sets.
Posted Content
Evaluating Models' Local Decision Boundaries via Contrast Sets.
Matt Gardner,Yoav Artzi,Victoria Basmova,Jonathan Berant,Ben Bogin,Sihao Chen,Pradeep Dasigi,Dheeru Dua,Yanai Elazar,Ananth Gottumukkala,Nitish Gupta,Hanna Hajishirzi,Gabriel Ilharco,Daniel Khashabi,Kevin Lin,Jiangming Liu,Nelson F. Liu,Phoebe Mulcaire,Qiang Ning,Sameer Singh,Noah A. Smith,Sanjay Subramanian,Reut Tsarfaty,Eric Wallace,Ally Zhang,Ben Zhou +25 more
TL;DR: Contrast sets, as introduced in this paper, are a new annotation paradigm for NLP that helps close systematic gaps in test data: dataset authors manually perturb the test instances in small but meaningful ways that change the gold label.
Proceedings ArticleDOI
ReCLIP: A Strong Zero-Shot Baseline for Referring Expression Comprehension
TL;DR: The first component of ReCLIP is a region-scoring method that isolates object proposals via cropping and blurring and passes them to CLIP; however, the authors find that CLIP is largely incapable of performing spatial reasoning off-the-shelf.