scispace - formally typeset

Richard L. Lewis

Researcher at University of Michigan

Publications: 139
Citations: 8,099

Richard L. Lewis is an academic researcher from the University of Michigan. The author has contributed to research in topics: Reinforcement learning & Sentence. The author has an h-index of 38 and has co-authored 132 publications receiving 7,317 citations. Previous affiliations of Richard L. Lewis include Carnegie Mellon University and Ohio State University.

Papers
Journal Article (DOI)

The Mind and Brain of Short-Term Memory

TL;DR: Describes a conceptual model tracing the representation of a single item through a short-term memory task, specifying the biological mechanisms that might support psychological processes on a moment-by-moment basis as an item is encoded, maintained over a delay with some forgetting, and ultimately retrieved.
Journal Article (DOI)

An activation-based model of sentence processing as skilled memory retrieval

TL;DR: The authors presented a detailed process theory of the moment-by-moment working-memory retrievals and associated control structure that subserve sentence comprehension, which is derived from the application of independently motivated principles of memory and cognitive skill to the specialized task of sentence parsing.
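The retrieval mechanism in this line of work is built on ACT-R-style activation: an item's base-level activation decays with time since its past uses, and retrieval cues spread extra activation to matching items. A minimal illustrative sketch, where the decay parameter, cue weighting, and two-component form are simplifying assumptions for illustration rather than the paper's exact equations:

```python
import math

def base_level_activation(use_times, now, decay=0.5):
    """Base-level activation: log of summed power-law decay over past uses."""
    return math.log(sum((now - t) ** -decay for t in use_times))

def total_activation(use_times, now, cue_matches, cue_weight=1.0):
    """Activation = base level (recency/frequency) + spreading activation
    from the retrieval cues that match this item."""
    return base_level_activation(use_times, now) + cue_weight * cue_matches

# An item used at t=1 and t=3, retrieved at t=5 with two matching cues,
# ends up more active than an item used only once, long ago:
a_recent = total_activation([1.0, 3.0], now=5.0, cue_matches=2)
a_old = total_activation([1.0], now=5.0, cue_matches=2)
```

In the theory, the item with the highest total activation wins the retrieval, and its activation also determines retrieval latency, which is how the model derives moment-by-moment reading-time predictions.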
Posted Content

Action-Conditional Video Prediction using Deep Networks in Atari Games

TL;DR: In this article, the authors proposed and evaluated two deep neural network architectures, built from convolutional and recurrent neural networks, that consist of encoding, action-conditional transformation, and decoding layers.
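The encode/transform/decode pipeline can be sketched in miniature. This is a toy linear version with made-up dimensions and random weights; the actual models use deep convolutional and recurrent encoders trained on Atari frames, but the key structural idea — the action selects a multiplicative transformation of the latent code — is the same:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes (not the paper's): 16-dim "frames", 8-dim latent, 4 actions.
FRAME, D, A = 16, 8, 4
W_enc = rng.normal(size=(D, FRAME))  # encoding layer: frame -> latent
W_act = rng.normal(size=(A, D))      # one latent-sized factor per discrete action
W_dec = rng.normal(size=(FRAME, D))  # decoding layer: latent -> predicted frame

def predict_next_frame(frame, action):
    """Encode the frame, apply an action-conditional multiplicative
    transformation to the latent code, then decode the next frame."""
    h = np.tanh(W_enc @ frame)   # encoding
    h_act = h * W_act[action]    # action-conditional transformation
    return W_dec @ h_act         # decoding

frame = rng.normal(size=FRAME)
pred = predict_next_frame(frame, action=2)
```

Because the action enters multiplicatively rather than additively, different actions produce genuinely different predicted futures from the same input frame.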
Journal Article (DOI)

Computational principles of working memory in sentence comprehension.

TL;DR: An emerging theoretical framework for a working memory system that incorporates several independently motivated principles of memory: a sharply limited attentional focus, rapid retrieval of item information subject to interference from similar items, and activation decay (forgetting over time).
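The interference principle can be illustrated with a fan-effect-style toy: each retrieval cue's associative strength is divided among all the items it matches, so a cue shared by several similar items discriminates poorly. A hedged sketch — the feature names and the simple 1/fan scoring are illustrative assumptions, not the framework's exact formulation:

```python
def retrieval_scores(chunks, cues):
    """Cue-based retrieval with similarity interference: each cue's
    strength is split among all chunks that match it (fan effect)."""
    fan = {c: sum(c in feats for feats in chunks.values()) for c in cues}
    return {
        name: sum(1.0 / fan[c] for c in cues if c in feats)
        for name, feats in chunks.items()
    }

# Two candidate nouns for a dependency: the shared 'singular' cue matches
# both and so contributes less discrimination than the unique 'subject' cue.
chunks = {
    "noun1": {"singular", "subject"},
    "noun2": {"singular", "object"},
}
scores = retrieval_scores(chunks, cues={"singular", "subject"})
```

Here "noun1" wins, but by less than it would if "noun2" were dissimilar — which is how similarity-based interference produces graded slowdowns and errors in comprehension.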
Journal Article (DOI)

Intrinsically Motivated Reinforcement Learning: An Evolutionary Perspective

TL;DR: A new optimal reward framework is defined that captures the pressure to design good primary reward functions that lead to evolutionary success across environments and shows that optimal primary reward signals may yield both emergent intrinsic and extrinsic motivation.
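The optimal reward framework separates the designer's (or evolution's) external fitness from the agent's internal reward, then searches for the internal reward that maximizes fitness across environments. A toy sketch under stated assumptions — the 1-D foraging world, the greedy agent, and the (food, exploration) weight candidates are all invented for illustration:

```python
import random

def run_agent(weights, seed, steps=50, size=40):
    """Toy 1-D world: some cells hold food (eaten once). The agent moves
    greedily under its INTERNAL reward; external fitness = food eaten."""
    w_food, w_explore = weights
    rng = random.Random(seed)
    food = {i for i in range(size) if rng.random() < 0.5}
    pos, visited, fitness = size // 2, set(), 0
    for _ in range(steps):
        visited.add(pos)
        if pos in food:           # external fitness accrues on eating,
            fitness += 1          # regardless of the internal reward
            food.discard(pos)
        def internal_reward(cell):
            return w_food * (cell in food) + w_explore * (cell not in visited)
        # Candidate moves: stay, left, right (ties favor staying put).
        moves = [pos] + [p for p in (pos - 1, pos + 1) if 0 <= p < size]
        pos = max(moves, key=internal_reward)
    return fitness

def optimal_reward(candidates, seeds):
    """Optimal-reward search: pick the internal reward whose agent earns
    the most external fitness summed across sampled environments."""
    return max(candidates, key=lambda w: sum(run_agent(w, s) for s in seeds))

candidates = [(1.0, 0.0), (1.0, 0.5), (0.0, 1.0)]
best = optimal_reward(candidates, seeds=range(10))
```

In this toy, a purely extrinsic (food-only) reward leaves the myopic agent stuck once no food is adjacent, while candidates with an exploration term keep it moving and collect more fitness — mirroring the paper's point that optimal primary rewards can make intrinsically motivated behavior emerge.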