
Mitchell Gordon

Researcher at Stanford University

Publications: 29
Citations: 712

Mitchell Gordon is an academic researcher from Stanford University. The author has contributed to research in topics including computer science and crowds, has an h-index of 12, and has co-authored 27 publications receiving 482 citations. Previous affiliations of Mitchell Gordon include the University of Rochester.

Papers
Proceedings Article

Glance: rapidly coding behavioral video with the crowd

TL;DR: Glance's rapid responses to natural language queries, feedback regarding question ambiguity and anomalies in the data, and ability to build on prior context in follow-up queries allow users to have a conversation-like interaction with their data, opening up new possibilities for naturally exploring video data.
Proceedings Article

WatchWriter: Tap and Gesture Typing on a Smartwatch Miniature Keyboard with Statistical Decoding

TL;DR: WatchWriter demonstrates that human motor control adaptability, coupled with modern statistical decoding and error correction technologies developed for smartphones, can enable a surprisingly effective typing performance despite the small watch size.
Proceedings Article

The Disagreement Deconvolution: Bringing Machine Learning Performance Metrics In Line With Reality

TL;DR: In this paper, disagreement deconvolution takes in any multi-annotator (e.g., crowdsourced) dataset and disentangles stable opinions from noise by estimating intra-annotator consistency.
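The core deconvolution step can be illustrated with a minimal sketch. This is not the paper's exact estimator: it assumes symmetric annotator noise, where an annotator reports the opposite of their stable opinion with some flip rate (which the paper estimates from intra-annotator consistency), and inverts the resulting mixture to recover the stable positive-label rate for an item.

```python
def deconvolve(p_obs, flip_rate):
    """Estimate an item's stable (primary-opinion) positive-label rate.

    Assumption (illustrative, not the paper's full method): annotators
    flip their stable label with probability `flip_rate`, symmetrically.
    Then the observed rate is
        p_obs = p_stable * (1 - flip_rate) + (1 - p_stable) * flip_rate,
    and inverting gives
        p_stable = (p_obs - flip_rate) / (1 - 2 * flip_rate).
    """
    if flip_rate >= 0.5:
        raise ValueError("flip_rate must be < 0.5 for the inversion to be defined")
    p_stable = (p_obs - flip_rate) / (1.0 - 2.0 * flip_rate)
    return min(1.0, max(0.0, p_stable))  # clip to a valid probability
```

For example, with an observed positive rate of 0.6 and a flip rate of 0.1, the estimated stable rate is 0.625: noise pulls observed rates toward 0.5, so deconvolution pushes them back out.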
Proceedings Article

HYPE: A Benchmark for Human eYe Perceptual Evaluation of Generative Models

TL;DR: This work establishes a gold standard human benchmark for generative realism by constructing Human eYe Perceptual Evaluation (HYPE), a human benchmark that is grounded in psychophysics research in perception, reliable across different sets of randomly sampled outputs from a model, able to produce separable model performances, and efficient in cost and time.
Proceedings Article

Jury Learning: Integrating Dissenting Voices into Machine Learning Models

TL;DR: A deep learning architecture that models every annotator in a dataset, samples from annotators' models to populate the jury, and then runs inference to classify, enabling juries that dynamically adapt their composition, explore counterfactuals, and visualize dissent.