Journal ArticleDOI

Neuronlike adaptive elements that can solve difficult learning control problems

TLDR
In this article, it is shown that a system consisting of two neuron-like adaptive elements can solve a difficult learning control problem, where the task is to balance a pole that is hinged to a movable cart by applying forces to the cart's base.
Abstract
It is shown how a system consisting of two neuronlike adaptive elements can solve a difficult learning control problem. The task is to balance a pole that is hinged to a movable cart by applying forces to the cart's base. It is argued that the learning problems faced by adaptive elements that are components of adaptive networks are at least as difficult as this version of the pole-balancing problem. The learning system consists of a single associative search element (ASE) and a single adaptive critic element (ACE). In the course of learning to balance the pole, the ASE constructs associations between input and output by searching under the influence of reinforcement feedback, and the ACE constructs a more informative evaluation function than reinforcement feedback alone can provide. The differences between this approach and other attempts to solve problems using neuronlike elements are discussed, as is the relation of this work to classical and instrumental conditioning in animal learning studies and its possible implications for research in the neurosciences.
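The ASE/ACE interaction described in the abstract can be sketched as a minimal actor-critic loop: the ACE turns sparse reinforcement into a denser internal signal via a temporal-difference prediction, and the ASE adjusts its weights along eligibility traces of recent input-output pairings. The class names, hyperparameters, and toy one-state task below are illustrative assumptions, not the paper's cart-pole setup.

```python
import numpy as np

rng = np.random.default_rng(0)

class ACE:
    """Adaptive critic element: learns a prediction p(x) and emits an
    internal reinforcement signal r_hat = r + gamma * p(x') - p(x)."""
    def __init__(self, n, beta=0.5, gamma=0.95, lam=0.8):
        self.v = np.zeros(n)
        self.trace = np.zeros(n)  # decaying stimulus trace
        self.beta, self.gamma, self.lam = beta, gamma, lam

    def internal_reinforcement(self, x, x_next, r, terminal=False):
        p = self.v @ x
        p_next = 0.0 if terminal else self.v @ x_next
        r_hat = r + self.gamma * p_next - p
        self.trace = self.lam * self.trace + (1 - self.lam) * x
        self.v += self.beta * r_hat * self.trace
        return r_hat

class ASE:
    """Associative search element: noisy binary action plus
    reinforcement-weighted eligibility traces."""
    def __init__(self, n, alpha=1.0, delta=0.9):
        self.w = np.zeros(n)
        self.e = np.zeros(n)
        self.alpha, self.delta = alpha, delta

    def act(self, x, noise_std=0.1):
        # stochastic search: noise makes the element try both actions
        y = 1.0 if self.w @ x + rng.normal(0.0, noise_std) > 0 else -1.0
        self.e = self.delta * self.e + (1 - self.delta) * y * x
        return y

    def learn(self, r_hat):
        # strengthen whatever the element recently did if r_hat is positive
        self.w += self.alpha * r_hat * self.e

# toy one-state task (illustrative): action +1 is rewarded, -1 is punished
n = 2
ase, ace = ASE(n), ACE(n)
x = np.array([1.0, 0.0])
for _ in range(200):
    y = ase.act(x)
    r = 1.0 if y > 0 else -1.0
    r_hat = ace.internal_reinforcement(x, x, r, terminal=True)
    ase.learn(r_hat)
```

After training, `ase.w[0]` is positive (the ASE has learned to emit the rewarded action in this state) and `ace.v[0]` has climbed toward the expected reward, so `r_hat` shrinks as the prediction improves.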


Citations
Proceedings Article

Boosted Fitted Q-Iteration

TL;DR: This paper studies B-FQI, an Approximate Value Iteration (AVI) algorithm that exploits a boosting procedure to estimate the action-value function in reinforcement learning problems, and compares its performance to that of FQI across different domains and regression techniques.
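The boosting procedure is specific to the cited paper; as background, here is a minimal sketch of generic Fitted Q-Iteration on a fixed batch of transitions, using one-hot features with a least-squares fit standing in for the supervised regressor. The function name and toy MDP are illustrative assumptions.

```python
import numpy as np

def fitted_q_iteration(transitions, n_states, n_actions, gamma=0.9, n_iters=50):
    """Generic FQI: repeatedly regress Q onto bootstrapped targets
    r + gamma * max_a' Q(s', a') computed from a fixed batch."""
    # one-hot (state, action) features; the least-squares fit plays the
    # role of the supervised regressor in FQI
    X = np.zeros((len(transitions), n_states * n_actions))
    for i, (s, a, _, _) in enumerate(transitions):
        X[i, s * n_actions + a] = 1.0
    theta = np.zeros(n_states * n_actions)
    for _ in range(n_iters):
        Q = theta.reshape(n_states, n_actions)
        # bootstrapped regression targets for the current iterate
        y = np.array([r + gamma * Q[s2].max() for (_, _, r, s2) in transitions])
        theta = np.linalg.lstsq(X, y, rcond=None)[0]
    return theta.reshape(n_states, n_actions)

# toy two-state MDP: action 1 in state 0 moves to absorbing state 1 (reward 1);
# everything else yields reward 0
batch = [(0, 0, 0.0, 0), (0, 1, 1.0, 1), (1, 0, 0.0, 1), (1, 1, 0.0, 1)]
Q = fitted_q_iteration(batch, n_states=2, n_actions=2)
```

On this batch the iteration converges to Q(0, 1) = 1 and Q(0, 0) = gamma * 1 = 0.9, with the absorbing state's values at zero.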
Journal ArticleDOI

A strategy for controlling nonlinear systems using a learning automaton

X. Zeng, +2 more
- 01 Oct 2000 - 
TL;DR: This paper presents an application of a learning automaton (LA) to nonlinear system control, in which the reinforcement scheme is based on the Pursuit Algorithm interacting with a nonstationary environment.
Journal ArticleDOI

A deep Q-learning portfolio management framework for the cryptocurrency market

TL;DR: A novel deep Q-learning portfolio management framework is proposed, composed of a set of local agents that learn asset behaviour and a global agent that defines the global reward function; the approach has proven promising for dynamic portfolio optimization.
Posted Content

The Differentiable Cross-Entropy Method

TL;DR: A differentiable variant of the cross-entropy method (CEM) is introduced that enables differentiating CEM's output with respect to the objective function's parameters, bringing CEM into end-to-end learning pipelines where this was previously impossible.
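The differentiable variant is that paper's contribution; for context, a minimal sketch of the vanilla (non-differentiable) cross-entropy method it builds on, assuming a generic black-box objective to be minimized. The function name and hyperparameters are illustrative assumptions.

```python
import numpy as np

def cross_entropy_method(f, mu, sigma, n_samples=100, n_elite=10,
                         n_iters=50, seed=0):
    """Vanilla CEM: sample from a Gaussian, keep the lowest-cost elites,
    refit the Gaussian to them, and repeat."""
    rng = np.random.default_rng(seed)
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    for _ in range(n_iters):
        samples = rng.normal(mu, sigma, size=(n_samples, mu.size))
        costs = [f(s) for s in samples]
        elite = samples[np.argsort(costs)[:n_elite]]
        # the argsort/selection step is the non-differentiable part that
        # the differentiable variant replaces
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-8
    return mu

# minimize squared distance to an arbitrary target point
target = np.array([2.0, -1.0])
best = cross_entropy_method(lambda x: float(np.sum((x - target) ** 2)),
                            np.zeros(2), np.ones(2))
```

The top-k selection makes the sampling distribution collapse onto the minimizer, which is exactly the discrete operation that blocks gradients in the standard method.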
References
Journal ArticleDOI

Receptive fields, binocular interaction and functional architecture in the cat's visual cortex

TL;DR: The method is used to examine receptive fields of a more complex type and to make additional observations on binocular interaction. This approach is necessary to understand the behaviour of individual cells, but it does not address the relationship of one cell to its neighbours.
Journal ArticleDOI

A Theory of Cerebellar Cortex

TL;DR: A detailed theory of cerebellar cortex is proposed, a consequence of which is that the cerebellum learns to perform motor skills; two forms of input-output relation are described, both consistent with the cortical theory.
Journal ArticleDOI

Receptive fields and functional architecture in two nonstriate visual areas (18 and 19) of the cat.

TL;DR: Understanding vision in physiological terms represents a formidable problem for the biologist; one approach is to stimulate the retina with patterns of light while recording from single cells or fibers at various points along the visual pathway.
Journal ArticleDOI

Toward a modern theory of adaptive networks: Expectation and prediction.

TL;DR: The adaptive element presented learns to increase its response rate in anticipation of increased stimulation, producing a conditioned response before the occurrence of the unconditioned stimulus, and is in strong agreement with the behavioral data regarding the effects of stimulus context.
Journal ArticleDOI

Steps toward Artificial Intelligence

TL;DR: The problems of heuristic programming can be divided into five main areas: search, pattern recognition, learning, planning, and induction; the paper surveys the most successful heuristic (problem-solving) programs constructed to date.