Matthew Botvinick

Researcher at University College London

Publications - 249
Citations - 57,443

Matthew Botvinick is an academic researcher at University College London whose work spans topics including reinforcement learning and computer science. He has an h-index of 77 and has co-authored 224 publications receiving 48,206 citations. His previous affiliations include Princeton University and the University of Pennsylvania.

Papers
Posted Content

Neural evidence for the successor representation in choice evaluation

TL;DR: In this paper, the authors used fMRI to measure predictive representations in a setting where the Successor Representation implies specific errors in multi-step expectancies and corresponding behavioral errors.
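For readers unfamiliar with it, the successor representation (SR) can be stated in a few lines. The Python sketch below uses a toy three-state chain invented here for illustration, not the paper's task or fMRI analysis, to show how the SR caches multi-step expectancies:

```python
import numpy as np

# Under a fixed policy with state-transition matrix T, the SR matrix M
# holds expected discounted future state occupancies. The chain and
# rewards below are invented for illustration.
gamma = 0.9
T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 1.0]])  # state 2 is absorbing

# Closed form: M = sum over t of gamma^t T^t = (I - gamma T)^-1
M = np.linalg.inv(np.eye(3) - gamma * T)

# Values combine the SR with one-step rewards: V = M r. If T changes
# but M is not relearned, the stale occupancies yield exactly the kind
# of systematic multi-step expectancy errors the TL;DR refers to.
r = np.array([0.0, 0.0, 1.0])
V = M @ r
print(M.round(2))
print(V.round(2))
```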
Proceedings Article

MEMO: A Deep Network for Flexible Combination of Episodic Memories

TL;DR: A novel architecture, MEMO, endowed with the capacity to reason over longer distances, is developed through the addition of two novel components, one of which introduces a separation between the memories/facts stored in external memory and the items that comprise those facts.
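As a rough intuition for that separation (a hand-rolled toy, not the published MEMO architecture; all vectors, facts, and the fixed hop count are hypothetical), one can store one memory slot per constituent item of a fact, so that attention can recombine items across facts:

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

# Hypothetical item embeddings and facts, for illustration only.
vocab = ["john", "ball", "kitchen"]
item_vecs = {w: rng.normal(size=dim) for w in vocab}
facts = [("john", "kitchen"), ("ball", "john")]

# One slot per (fact, item) pair, instead of one fused vector per fact.
slots = np.stack([item_vecs[w] for fact in facts for w in fact])

def attend(query, keys):
    """Content-based read: attention-weighted sum over item slots."""
    logits = keys @ query
    weights = np.exp(logits - logits.max())
    weights /= weights.sum()
    return weights @ keys

# A multi-hop question ("where is the ball?") chains readouts: hop 1
# relates "ball" to "john", hop 2 relates "john" to "kitchen".
state = item_vecs["ball"]
for _ in range(2):  # fixed 2 hops here; MEMO instead learns when to halt
    state = attend(state, slots)
```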
Posted Content

Rapid Task-Solving in Novel Environments

TL;DR: A recursive implicit planning module that operates over episodic memories is developed, and the resulting deep-RL agent is shown to explore and plan in novel environments, outperforming the nearest baseline by factors of 2-3 across the two domains.
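"Implicit planning over episodic memories" roughly means refining a plan vector by repeatedly reading from a memory of past experience, rather than running an explicit tree search. A generic sketch of that loop, with made-up shapes and untrained stand-in weights rather than the paper's actual module, might look like:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_memories, n_steps = 16, 32, 3

memory = rng.normal(size=(n_memories, dim))    # stored episode embeddings
W_mix = rng.normal(size=(2 * dim, dim)) * 0.1  # stand-in for a learned layer

plan = rng.normal(size=dim)  # would be initialized from the observation
for _ in range(n_steps):     # the recursion: repeated read-and-refine
    logits = memory @ plan
    attn = np.exp(logits - logits.max())
    attn /= attn.sum()
    readout = attn @ memory  # attention-weighted read from episodic memory
    plan = np.tanh(np.concatenate([plan, readout]) @ W_mix)
# downstream, the policy would condition on the refined plan vector
```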
Journal Article

Empirical and computational support for context-dependent representations of serial order: reply to Bowers, Damian, and Davis (2009).

TL;DR: The authors reply here, addressing both the specific criticisms Bowers et al. raise against the Botvinick and Plaut model and their broader assessment of parallel distributed processing models in general.
Proceedings Article

Leveraging Preposition Ambiguity to Assess Compositional Distributional Models of Semantics

TL;DR: A new method is presented for assessing the degree to which compositional distributional semantic models (CDSMs) capture semantic interactions, one that dissociates the influences of lexical and compositional information. The assessment shows that neural language input vectors are consistently superior to co-occurrence-based vectors, and that vector addition matches, and in many cases exceeds, purpose-built parameterized models.
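The vector-addition baseline mentioned in the TL;DR is simple enough to show directly. The sketch below uses random stand-in word vectors (a real evaluation would use trained neural or co-occurrence embeddings, and the example sentences are invented) to illustrate scoring a composed phrase against competing paraphrases of an ambiguous preposition:

```python
import numpy as np

rng = np.random.default_rng(2)
dim = 50

# Random stand-in embeddings; only the scoring procedure is the point.
words = ["ran", "by", "the", "store", "sprinted", "past", "owned"]
word_vecs = {w: rng.normal(size=dim) for w in words}

def compose(phrase):
    """Additive composition: phrase vector = sum of its word vectors."""
    return np.sum([word_vecs[w] for w in phrase], axis=0)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# "ran by the store" is ambiguous: passed it, or operated it? A model
# is credited when the composed phrase lands closer to the correct
# paraphrase than to the incorrect one.
phrase = compose(["ran", "by", "the", "store"])
print(cosine(phrase, compose(["sprinted", "past", "the", "store"])))
print(cosine(phrase, compose(["owned", "the", "store"])))
```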