
Francesco Galluppi

Researcher at Vision Institute

Publications -  56
Citations -  3163

Francesco Galluppi is an academic researcher at the Vision Institute. He has contributed to research on spiking neural networks and artificial neural networks, has an h-index of 25, and has co-authored 56 publications receiving 2,555 citations. His previous affiliations include Sapienza University of Rome and the University of Manchester.

Papers
Journal ArticleDOI

The SpiNNaker Project

TL;DR: SpiNNaker is a massively parallel, million-core computer whose interconnect architecture is inspired by the connectivity characteristics of the mammalian brain, making it well suited to modeling large-scale spiking neural networks in biological real time.
Journal ArticleDOI

HOTS: A Hierarchy of Event-Based Time-Surfaces for Pattern Recognition

TL;DR: The central concept is to use the rich temporal information provided by events to create contexts in the form of time-surfaces, which represent the recent temporal activity within a local spatial neighborhood. The authors demonstrate that this concept can be used robustly at all stages of an event-based hierarchical model.
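As a rough illustration (not the paper's implementation), a time-surface can be sketched as an exponentially decayed map of the most recent event timestamps in a local spatial neighborhood. The function name, array layout, and parameter values below are hypothetical:

```python
import numpy as np

def time_surface(last_times, x, y, t, radius=2, tau=50e-3):
    """Exponentially decayed time-surface around event (x, y, t).

    last_times: 2D array holding the most recent event timestamp per pixel
    (-inf where no event has occurred yet, which decays to 0).
    """
    patch = last_times[y - radius:y + radius + 1, x - radius:x + radius + 1]
    # Values lie in (0, 1]; the most recent events are closest to 1.
    return np.exp((patch - t) / tau)

# Toy event stream: (x, y, t) tuples on an 8x8 sensor.
events = [(3, 3, 0.010), (4, 3, 0.012), (3, 4, 0.015)]
last_times = np.full((8, 8), -np.inf)
for x, y, t in events:
    last_times[y, x] = t
    ts = time_surface(last_times, x, y, t)  # 5x5 temporal context for this event
```

In the paper's hierarchy, such patches serve as features: each layer matches incoming time-surfaces against learned prototypes and emits new events, building increasingly abstract representations.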
Journal ArticleDOI

SpiNNaker: A 1-W 18-Core System-on-Chip for Massively-Parallel Neural Network Simulation

TL;DR: The design requirements of the very demanding target application and the SpiNNaker micro-architecture are reviewed; the chips are fully operational and meet their power and performance requirements.
Proceedings ArticleDOI

Implementing spike-timing-dependent plasticity on SpiNNaker neuromorphic hardware

TL;DR: A deferred event-driven model is developed to enable the pre-sensitive scheme by postponing the STDP process until sufficient spike-timing history has been recorded, along with a discussion of issues related to efficient STDP implementation on parallel neuromorphic hardware.
Journal ArticleDOI

Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms

TL;DR: It is demonstrated that spiking DBNs can tolerate hardware bit precision down to almost two bits, and that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account.
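To illustrate what reduced bit precision means here, a hedged sketch of uniform weight quantization, not the paper's training mechanism. The function and parameter names are hypothetical:

```python
import numpy as np

def quantize(weights, bits, w_max=1.0):
    """Uniformly quantize weights to a signed fixed-point grid.

    With bits=2 there is one magnitude level per sign, so weights
    collapse to the near-ternary set {-w_max, 0, +w_max}.
    """
    levels = 2 ** (bits - 1) - 1
    step = w_max / levels
    return np.clip(np.round(weights / step) * step, -w_max, w_max)

rng = np.random.default_rng(0)
w = rng.uniform(-1, 1, size=1000)    # stand-in for trained DBN weights
w_q = quantize(w, bits=2)            # the extreme low-precision regime
```

A precision-aware training mechanism, as in the paper, would account for this quantization during training rather than applying it only afterwards, which is what recovers much of the lost accuracy.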