Aaron R. Voelker
Researcher at University of Waterloo
Publications - 32
Citations - 1006
Aaron R. Voelker is an academic researcher at the University of Waterloo whose work focuses on spiking neural networks and artificial neural networks. He has an h-index of 10 and has co-authored 32 publications receiving 668 citations. His previous affiliations include Google.
Papers
Journal Article
Nengo: a Python tool for building large-scale functional brain models.
Trevor Bekolay, James Bergstra, Eric Hunsberger, Travis DeWolf, Terrence C. Stewart, Daniel Rasmussen, Xuan Choo, Aaron R. Voelker, Chris Eliasmith, +8 more
TL;DR: Nengo 2.0 is described, which is implemented in Python and uses simple and extendable syntax, simulates a benchmark model on the scale of Spaun 50 times faster than Nengo 1.4, and has a flexible mechanism for collecting simulation results.
Journal Article
Braindrop: A Mixed-Signal Neuromorphic Architecture With a Dynamical Systems-Based Programming Model
Alexander Neckar, Sam Fok, Ben Varkey Benjamin, Terrence C. Stewart, Nick N. Oza, Aaron R. Voelker, Chris Eliasmith, Rajit Manohar, Kwabena Boahen, +8 more
TL;DR: Two innovations—sparse encoding through analog spatial convolution and weighted spike-rate summation through digital accumulative thinning—cut digital traffic drastically, reducing the energy Braindrop consumes per equivalent synaptic operation to 381 fJ for typical network configurations.
Proceedings Article
Legendre Memory Units: Continuous-Time Representation in Recurrent Neural Networks
TL;DR: Backpropagation through the ODE solver allows each layer to adapt its internal time-step, enabling the network to learn task-relevant time-scales and exceed state-of-the-art performance among RNNs on permuted sequential MNIST.
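The LMU's memory cell is a fixed linear system m'(t) = (A m(t) + B u(t))/θ whose matrices are derived from Legendre polynomials. A minimal pure-Python sketch of that state space, Euler-discretized; the dimension `d`, window length `theta`, and step `dt` below are illustrative choices, not values from the paper.

```python
# Sketch of the Legendre Memory Unit (LMU) memory cell:
#   m'(t) = (A m(t) + B u(t)) / theta
# where A and B are fixed matrices derived from Legendre polynomials.

def lmu_matrices(d):
    """Build the d-dimensional LMU state-space matrices A and B."""
    A = [[(2 * i + 1) * (-1.0 if i < j else (-1.0) ** (i - j + 1))
          for j in range(d)]
         for i in range(d)]
    B = [(2 * i + 1) * (-1.0) ** i for i in range(d)]
    return A, B

def simulate(u, d=4, theta=0.1, dt=0.001):
    """Euler-integrate the memory cell over the input sequence u."""
    A, B = lmu_matrices(d)
    m = [0.0] * d
    for u_t in u:
        dm = [sum(A[i][j] * m[j] for j in range(d)) + B[i] * u_t
              for i in range(d)]
        m = [m[i] + (dt / theta) * dm[i] for i in range(d)]
    return m

# A constant input fills the sliding window with that constant, so the
# zeroth Legendre coefficient converges to it and the rest decay to zero.
m = simulate([1.0] * 1000)
```

The state m holds Legendre-polynomial coefficients of the input over a sliding window of length θ, which is what lets the cell represent continuous-time history with few state variables; in the full LMU this linear memory is coupled to a learned nonlinear layer.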
Journal Article
Human-Inspired Neurorobotic System for Classifying Surface Textures by Touch
TL;DR: This work implements a neurorobotic texture classifier with a recurrent spiking neural network, using a novel semisupervised approach for classifying dynamic stimuli, and demonstrates that this approach significantly improves upon a baseline model that does not use the described feature extraction.
Journal Article
A neural model of hierarchical reinforcement learning.
TL;DR: A novel, biologically detailed neural model of reinforcement learning (RL) processes in the brain that divides the RL process into a hierarchy of actions at different levels of abstraction is developed and demonstrates improved performance as a result of its hierarchical ability.