
Sameer Singh

Researcher at University of California, Irvine

Publications: 196
Citations: 24675

Sameer Singh is an academic researcher from the University of California, Irvine. The author has contributed to research on topics including computer science and inference. The author has an h-index of 45 and has co-authored 185 publications receiving 15043 citations. Previous affiliations of Sameer Singh include the University of Washington and the University of Massachusetts Amherst.

Papers
Proceedings Article

Towards Extracting Faithful and Descriptive Representations of Latent Variable Models

TL;DR: In this paper, the authors propose to extract an interpretable proxy model from a predictive latent variable model using a so-called pedagogical method, in which the predictive model is queried to obtain the observations needed for learning a descriptive model.
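
A minimal sketch of the pedagogical idea described in this summary, assuming scikit-learn and NumPy are available; the black-box model, the pool of query inputs, and the function names are illustrative, not the paper's implementation:

```python
# Pedagogical extraction sketch: treat the trained predictive model as a
# black box, query it for labels on a pool of inputs, and fit an
# interpretable proxy (here a shallow decision tree) to the query/answer
# pairs. All names and parameters below are hypothetical.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def extract_proxy(black_box_predict, X_pool, max_depth=3):
    """Fit an interpretable proxy to a black-box model's predictions.

    black_box_predict: callable mapping an array of inputs to labels.
    X_pool: inputs used to query the black-box model.
    """
    y_query = black_box_predict(X_pool)           # query the predictive model
    proxy = DecisionTreeClassifier(max_depth=max_depth)
    proxy.fit(X_pool, y_query)                    # learn a descriptive model
    return proxy

def fidelity(proxy, black_box_predict, X_test):
    # How often the proxy agrees with the black box on held-out inputs.
    return np.mean(proxy.predict(X_test) == black_box_predict(X_test))
```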
Book Chapter

From Reinforcement Learning to Deep Reinforcement Learning: An Overview.

TL;DR: This article provides a brief overview of reinforcement learning, from its origins to current research trends, including deep reinforcement learning, with an emphasis on first principles.
Proceedings Article

Obtaining Faithful Interpretations from Compositional Neural Networks

TL;DR: In this article, the intermediate outputs of neural module networks (NMNs) are evaluated on NLVR2 and DROP, two datasets that require composing multiple reasoning steps, and it is shown that the network structure alone does not provide a faithful explanation of model behaviour.
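
A minimal sketch of the kind of faithfulness check this summary suggests: score each module's intermediate outputs against gold annotations for that reasoning step. The data format, module names, and example values below are hypothetical, not the paper's evaluation code:

```python
# Faithfulness sketch: per-module accuracy of intermediate outputs. High
# end-task accuracy paired with low intermediate accuracy would suggest the
# module structure is not a faithful explanation of model behaviour.
from collections import defaultdict

def intermediate_faithfulness(steps):
    """steps: iterable of dicts with 'module', 'predicted', 'gold' keys,
    one entry per reasoning step of a neural module network."""
    correct, total = defaultdict(int), defaultdict(int)
    for step in steps:
        total[step["module"]] += 1
        correct[step["module"]] += int(step["predicted"] == step["gold"])
    return {m: correct[m] / total[m] for m in total}

steps = [
    {"module": "find", "predicted": "dogs", "gold": "dogs"},
    {"module": "count", "predicted": 2, "gold": 3},
]
print(intermediate_faithfulness(steps))  # {'find': 1.0, 'count': 0.0}
```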
Journal Article

Detecting conversation topics in primary care office visits from transcripts of patient-provider interactions.

TL;DR: This study investigates the effectiveness of machine learning methods for automated annotation of medical topics in patient-provider dialog transcripts, and finds that incorporating sequential information across talk-turns improves the accuracy of topic prediction by smoothing out noisy information from individual talk-turns.
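
A minimal sketch of the smoothing idea only, not the paper's actual sequential models: take per-turn topic predictions from any classifier and smooth them with a sliding-window majority vote, so a single noisy talk-turn does not flip the predicted topic. The window size, topic labels, and example sequence are illustrative:

```python
# Sliding-window majority vote over per-turn topic predictions.
from collections import Counter

def smooth_topics(turn_predictions, window=1):
    smoothed = []
    for i in range(len(turn_predictions)):
        lo = max(0, i - window)
        hi = min(len(turn_predictions), i + window + 1)
        smoothed.append(Counter(turn_predictions[lo:hi]).most_common(1)[0][0])
    return smoothed

turns = ["meds", "meds", "smalltalk", "meds", "meds", "labs", "labs"]
print(smooth_topics(turns))
# ['meds', 'meds', 'meds', 'meds', 'meds', 'labs', 'labs']
```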
Posted Content

Cutting Down on Prompts and Parameters: Simple Few-Shot Learning with Language Models.

TL;DR: This paper shows that finetuning LMs in the few-shot setting can considerably reduce the need for prompt engineering, achieving accuracy competitive with manually tuned prompts across a wide range of tasks.
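
A minimal sketch of few-shot finetuning without a hand-engineered prompt, assuming the Hugging Face transformers and PyTorch libraries; the model name, examples, and training loop are illustrative, not the paper's exact setup:

```python
# Finetune a small classifier on a handful of labeled examples, feeding the
# raw text with no engineered prompt. Hyperparameters are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased"  # illustrative small model
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name, num_labels=2
)

# A few-shot training set: four labeled examples, raw text only.
texts = ["great movie", "terrible plot", "loved it", "waste of time"]
labels = torch.tensor([1, 0, 1, 0])

batch = tok(texts, padding=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

model.train()
for _ in range(10):  # a few passes over the tiny dataset
    out = model(**batch, labels=labels)
    out.loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```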