scispace - formally typeset
Simran Arora

Researcher at Stanford University

Publications: 38
Citations: 300

Simran Arora is an academic researcher from Stanford University. The author has contributed to research on the topics of Dark energy and Deceleration parameter, has an h-index of 5, and has co-authored 6 publications receiving 94 citations.

Papers
Posted Content

On the Opportunities and Risks of Foundation Models.

Rishi Bommasani, +113 more
16 Aug 2021
TL;DR: The authors provide a thorough account of the opportunities and risks of foundation models, ranging from their capabilities (e.g., language, vision, robotics, reasoning, human interaction) and technical principles (e.g., model architectures, training procedures, data, systems, security, evaluation, theory) to their applications.
Proceedings ArticleDOI

Ask Me Anything: A simple strategy for prompting language models

TL;DR: This paper develops an understanding of effective prompt formats and proposes weak supervision, a procedure for combining noisy predictions, to produce the final predictions of the GPT-Neo-6B model.
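The combining step can be pictured with a minimal sketch: each prompt format yields one noisy prediction, and an aggregator reduces them to a final answer. A plain majority vote stands in here for the paper's weak-supervision combiner; the function name and inputs are hypothetical, not from the paper's code.

```python
from collections import Counter

def aggregate_predictions(predictions):
    """Combine noisy per-prompt predictions into one final answer.

    Majority vote is a simplified stand-in for a weak-supervision
    combiner; `predictions` would hold one model output per prompt
    format (hypothetical inputs).
    """
    votes = Counter(predictions)
    # most_common(1) returns [(label, count)] for the top label
    return votes.most_common(1)[0][0]

# e.g., three prompt variants answering the same yes/no question
print(aggregate_predictions(["yes", "yes", "no"]))  # yes
```

A real weak-supervision combiner would additionally estimate each prompt's accuracy and correlations rather than weighting all votes equally.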
Posted Content

Bootleg: Chasing the Tail with Self-Supervised Named Entity Disambiguation

TL;DR: This work defines core reasoning patterns for disambiguation, creates a learning procedure to encourage the self-supervised model to learn the patterns, and shows how to use weak supervision to enhance the signals in the training data.
Proceedings ArticleDOI

Contextual Embeddings: When Are They Worth It?

TL;DR: This article studies the settings in which deep contextual embeddings (e.g., BERT) give large improvements in performance relative to classic pretrained embeddings, and to an even simpler baseline (random word embeddings), focusing on the impact of training set size and the linguistic properties of the task.
Posted Content

Contextual Embeddings: When Are They Worth It?

TL;DR: Surprisingly, both of these simpler baselines can match contextual embeddings on industry-scale data, and often perform within 5 to 10% accuracy (absolute) on benchmark tasks.
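The random-word-embedding baseline mentioned above can be sketched in a few lines: assign each vocabulary word a fixed random vector and represent a sentence by averaging them. The helper names and dimensions here are illustrative assumptions, not code from the paper.

```python
import numpy as np

def random_embedding_table(vocab, dim=300, seed=0):
    """Fixed random vector per word: the simple baseline compared
    against contextual embeddings (hypothetical helper)."""
    rng = np.random.default_rng(seed)
    return {w: rng.standard_normal(dim) for w in vocab}

def embed(sentence, table, dim=300):
    """Average the word vectors; out-of-vocabulary words map to zeros."""
    vecs = [table.get(w, np.zeros(dim)) for w in sentence.split()]
    return np.mean(vecs, axis=0)

table = random_embedding_table(["the", "cat", "sat"], dim=8)
print(embed("the cat sat", table, dim=8).shape)  # (8,)
```

The point of the baseline is that these vectors are never trained: any downstream gain over them isolates what pretraining, contextual or otherwise, actually contributes.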