Rodrigo Agís
Researcher at University of Granada
Publications - 21
Citations - 401
Rodrigo Agís is an academic researcher at the University of Granada. He has contributed to research on topics including artificial neural networks and optical flow. The author has an h-index of 8 and has co-authored 21 publications receiving 388 citations.
Papers
Journal ArticleDOI
Event-driven simulation scheme for spiking neural networks using lookup tables to characterize neuronal dynamics
TL;DR: This work implements and critically evaluates an event-driven algorithm (ED-LUT) that uses precalculated lookup tables to characterize synaptic and neuronal dynamics, and introduces an improved two-stage event-queue algorithm that allows simulations to scale efficiently to highly connected networks with arbitrary propagation delays.
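The two ingredients named above, precomputed dynamics tables and a delay-aware event queue, can be sketched in software. This is a minimal illustration, not the ED-LUT implementation: the exponential membrane decay, the single-queue structure (rather than the paper's two-stage queue), and all constants are assumptions.

```python
import heapq
import math

TAU = 20.0          # membrane time constant (ms); assumed value
DT = 0.1            # lookup-table resolution (ms); assumed value
TABLE_SIZE = 1000
# Precalculated lookup table characterizing membrane decay: exp(-t / TAU)
DECAY_LUT = [math.exp(-i * DT / TAU) for i in range(TABLE_SIZE)]

def decayed(v, elapsed_ms):
    """Decay a membrane potential via the precomputed table (no exp at run time)."""
    idx = min(int(elapsed_ms / DT), TABLE_SIZE - 1)
    return v * DECAY_LUT[idx]

def simulate(input_spikes, synapses, threshold=1.0, t_end=100.0):
    """Event-driven simulation: neurons are only updated when a spike reaches them.

    input_spikes: list of (time, neuron, weight) external events.
    synapses: dict src -> list of (dst, weight, delay_ms); delays are arbitrary.
    Returns the list of (time, neuron) output spikes.
    """
    queue = list(input_spikes)   # priority queue ordered by event time
    heapq.heapify(queue)
    v, last, fired = {}, {}, []
    while queue:
        t, n, w = heapq.heappop(queue)
        if t > t_end:
            break
        # Jump the neuron's state from its last update straight to time t
        v[n] = decayed(v.get(n, 0.0), t - last.get(n, 0.0)) + w
        last[n] = t
        if v[n] >= threshold:
            fired.append((t, n))
            v[n] = 0.0           # reset after spike
            # Schedule delivery to targets after each synapse's propagation delay
            for dst, w2, d in synapses.get(n, []):
                heapq.heappush(queue, (t + d, dst, w2))
    return fired
```

Because state is advanced only when events arrive, the cost scales with spike traffic rather than with simulated time, which is the motivation for the event-driven scheme.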
Journal ArticleDOI
Real-time computing platform for spiking neurons (RT-spike)
TL;DR: The overall goal is to investigate biologically realistic models for the real-time control of robots operating within closed action-perception loops; accordingly, the system's performance is evaluated by simulating a model of the cerebellum in which emulating the temporal dynamics of the synaptic integration process is important.
Journal ArticleDOI
Superpipelined high-performance optical-flow computation architecture
TL;DR: This work describes a novel superpipelined, fully parallelized architecture for optical-flow processing, capable of processing up to 170 frames per second at a resolution of 800×600 pixels, and discusses the advantages of high-frame-rate processing.
Book ChapterDOI
FPGA Implementation of Multi-layer Perceptrons for Speech Recognition
TL;DR: This work presents different hardware implementations of a multi-layer perceptron for speech recognition at two abstraction levels: register transfer level (VHDL) and a higher, algorithm-like level (Handel-C).
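The computation being mapped to hardware above is the standard multi-layer perceptron forward pass. As a hedged software reference model only (the paper's implementations are in VHDL and Handel-C, and the sigmoid activation and layer shapes here are illustrative assumptions):

```python
import math

def sigmoid(x):
    """Logistic activation; an assumed choice, not taken from the paper."""
    return 1.0 / (1.0 + math.exp(-x))

def mlp_forward(x, layers):
    """Forward pass through a multi-layer perceptron.

    x: input vector as a list of floats.
    layers: list of (W, b) pairs, where W is a list of rows (one per output
    neuron) and b is the matching bias vector.
    """
    for W, b in layers:
        # Each output neuron: weighted sum of inputs plus bias, then activation
        x = [sigmoid(sum(wi * xi for wi, xi in zip(row, x)) + bi)
             for row, bi in zip(W, b)]
    return x
```

Each layer is a matrix-vector product followed by a nonlinearity, which is why such networks pipeline well in hardware: the multiply-accumulate loops are regular and independent per output neuron.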
Journal ArticleDOI
Hardware event-driven simulation engine for spiking neural networks
TL;DR: This work presents a pipelined datapath designed to compute several events in parallel, avoiding idle computing resources, and describes a computing scheme that takes full advantage of the massive parallel processing resources available in FPGA devices.