Julian Jara-Ettinger
Researcher at Yale University
Publications - 69
Citations - 1526
Julian Jara-Ettinger is an academic researcher at Yale University. His research spans topics including Medicine and Inference. He has an h-index of 14, has co-authored 60 publications, and has received 992 citations. Previous affiliations include the Massachusetts Institute of Technology.
Papers
Journal ArticleDOI
Rational quantitative attribution of beliefs, desires and percepts in human mentalizing
TL;DR: In this article, a Bayesian theory of mind (BToM) model is proposed to infer an actor's beliefs, desires and percepts from how they move in the local spatial environment.
Journal ArticleDOI
The Naïve Utility Calculus: Computational Principles Underlying Commonsense Psychology
TL;DR: It is proposed that human social cognition is structured around a basic understanding of ourselves and others as intuitive utility maximizers: from a young age, humans implicitly assume that agents choose goals and actions to maximize the rewards they expect to obtain relative to the costs they expect to incur.
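The core idea above, that observers expect agents to pick the option maximizing reward minus cost, can be illustrated with a minimal sketch. The scenario, function names, and numbers below are hypothetical, not taken from the paper:

```python
# Hedged sketch of the naive utility calculus: an observer assumes agents
# choose the action whose utility U(a) = R(a) - C(a) is highest.
def utility(reward, cost):
    return reward - cost

def choose_action(actions, rewards, costs):
    # Pick the action with maximal reward-minus-cost utility.
    return max(actions, key=lambda a: utility(rewards[a], costs[a]))

# Illustrative case: two goals with equal rewards but different action costs.
actions = ["near_hill", "far_flat"]
rewards = {"near_hill": 5.0, "far_flat": 5.0}   # equal rewards
costs   = {"near_hill": 4.0, "far_flat": 1.0}   # climbing the hill is costly
print(choose_action(actions, rewards, costs))   # far_flat
```

On this view, observing which option an agent takes lets the observer work backwards to the agent's subjective costs and rewards.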
Journal ArticleDOI
Color naming across languages reflects color use
Edward Gibson, Richard Futrell, Julian Jara-Ettinger, Kyle Mahowald, Leon Bergen, Sivalogeswaran Ratnasingam, Mitchell Gibson, Steven Piantadosi, Bevil R. Conway +8 more
TL;DR: It is suggested that the cross-linguistic similarity in color-naming efficiency reflects colors of universal usefulness and provides an account of a principle (color use) that governs how color categories come about.
Journal ArticleDOI
Children's understanding of the costs and rewards underlying rational action.
TL;DR: In this article, the expectation of rational action is instantiated by a naive utility calculus sensitive to both agent-constant and agent-specific aspects of costs and rewards associated with actions.
Journal ArticleDOI
Theory of mind as inverse reinforcement learning
TL;DR: Inverse reinforcement learning (IRL), as discussed by the authors, can be used to predict other people's actions by simulating an RL model with the hypothesized beliefs and desires, while mental-state inference is achieved by inverting this model.
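The forward-model-plus-inversion scheme described above can be sketched in a few lines: a softmax policy serves as the forward model mapping a hypothesized desire to action probabilities, and Bayes' rule inverts it to infer the desire from an observed action. All names, hypotheses, and numbers here are illustrative assumptions, not the paper's actual model:

```python
import math

# Forward model: P(action | hypothesized rewards), a softmax over rewards.
def softmax_policy(rewards_by_action, beta=2.0):
    z = sum(math.exp(beta * r) for r in rewards_by_action.values())
    return {a: math.exp(beta * r) / z for a, r in rewards_by_action.items()}

# Inversion: P(hypothesis | observed action) via Bayes' rule over the forward model.
def infer_desire(observed_action, hypotheses, prior):
    post = {h: softmax_policy(rs)[observed_action] * prior[h]
            for h, rs in hypotheses.items()}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

# Two illustrative hypotheses about what the agent wants.
hypotheses = {
    "wants_apple":  {"go_apple": 1.0, "go_orange": 0.0},
    "wants_orange": {"go_apple": 0.0, "go_orange": 1.0},
}
prior = {"wants_apple": 0.5, "wants_orange": 0.5}

# Observing the agent head for the apple shifts belief toward "wants_apple".
posterior = infer_desire("go_apple", hypotheses, prior)
```

The softmax temperature `beta` plays the usual role in such models of allowing for approximately rational rather than perfectly rational action.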