Charles W. Anderson
Researcher at Colorado State University
Publications - 136
Citations - 8865
Charles W. Anderson is an academic researcher from Colorado State University. The author has contributed to research in the topics of artificial neural networks and reinforcement learning. The author has an h-index of 35 and has co-authored 129 publications receiving 8182 citations. Previous affiliations of Charles W. Anderson include the University of Manitoba and the University of Massachusetts Amherst.
Papers
Journal Article
Restricted gradient-descent algorithm for value-function approximation in reinforcement learning
TL;DR: This work presents the restricted gradient-descent (RGD) algorithm, a training method for local radial-basis-function networks developed specifically for use in reinforcement learning, and shows that RGD consistently generates better value-function approximations than the standard gradient-descent method, which is more susceptible to divergence.
Journal Article
Indicator patterns of forced change learned by an artificial neural network
Elizabeth A. Barnes, Benjamin A. Toms, James W. Hurrell, Imme Ebert-Uphoff, Charles W. Anderson, David G. Anderson +6 more
TL;DR: In this article, the authors train an artificial neural network (ANN) to identify the year associated with input maps of temperature and precipitation from forced climate model simulations, and apply a neural-network visualization technique (layerwise relevance propagation) to visualize the spatial patterns that lead the ANN to successfully predict the year.
Journal Article
Context-Aware Energy Enhancements for Smart Mobile Devices
TL;DR: Experimental results show that up to 90% successful prediction is possible with neural networks and k-nearest neighbor algorithms, improving upon prediction strategies in prior work by approximately 50%.
Journal Article
Robust Reinforcement Learning Control Using Integral Quadratic Constraints for Recurrent Neural Networks
Charles W. Anderson, Peter M. Young, Michael R. Buehner, James N. Knight, Keith Bush, Douglas C. Hittle +5 more
TL;DR: The stability of a control loop including a recurrent neural network (NN) is analyzed by replacing the nonlinear and time-varying components of the NN with integral quadratic constraints (IQCs) on their gain, and an algorithm is demonstrated for training the recurrent NN with reinforcement learning while guaranteeing stability during learning.
Proceedings Article
Faster reinforcement learning after pretraining deep networks to predict state dynamics
TL;DR: It is demonstrated that learning a predictive model of state dynamics can result in a pretrained hidden layer structure that reduces the time needed to solve reinforcement learning problems.