
Angelo Garofalo

Researcher at University of Bologna

Publications: 31
Citations: 413

Angelo Garofalo is an academic researcher at the University of Bologna. His research focuses on computer science and RISC-V. He has an h-index of 6 and has co-authored 20 publications receiving 175 citations.

Papers
Journal ArticleDOI

PULP-NN: Accelerating Quantized Neural Networks on Parallel Ultra-Low-Power RISC-V Processors

TL;DR: PULP-NN is an optimized computing library for a parallel, ultra-low-power, tightly coupled cluster of RISC-V processors, targeting byte and sub-byte data types down to INT-1.
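The actual PULP-NN kernels are hand-tuned for the cluster's RISC-V SIMD extensions; as a rough, generic illustration of the sub-byte data handling the TL;DR refers to, the C sketch below (hypothetical helper names, not the PULP-NN API) packs two INT-4 values into each byte and computes a dot product over the packed vectors:

```c
#include <assert.h>
#include <stdint.h>

/* Pack two signed 4-bit (INT-4) values into one byte, low nibble first.
 * Hypothetical helper for illustration, not the actual PULP-NN API. */
static uint8_t pack_int4(int8_t lo, int8_t hi) {
    return (uint8_t)((lo & 0x0F) | ((hi & 0x0F) << 4));
}

/* Extract nibble idx (0 = low, 1 = high) and sign-extend it to int8_t. */
static int8_t unpack_int4(uint8_t byte, int idx) {
    int8_t v = (byte >> (4 * idx)) & 0x0F;
    return (v & 0x08) ? (int8_t)(v | 0xF0) : v;
}

/* Dot product over two packed INT-4 vectors: two elements per byte. */
static int32_t dot_int4(const uint8_t *a, const uint8_t *b, int n_bytes) {
    int32_t acc = 0;
    for (int i = 0; i < n_bytes; i++)
        for (int j = 0; j < 2; j++)
            acc += (int32_t)unpack_int4(a[i], j) * unpack_int4(b[i], j);
    return acc;
}
```

Packing halves the memory traffic relative to int8 storage, which is the main lever such sub-byte kernels exploit on MCU-class memory budgets.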
Journal ArticleDOI

DORY: Automatic End-to-End Deployment of Real-World DNNs on Low-Cost IoT MCUs

TL;DR: This work proposes DORY (Deployment Oriented to memoRY), an automatic tool to deploy DNNs on low-cost MCUs with typically less than 1 MB of on-chip SRAM, and releases all developments (the DORY framework, the optimized backend kernels, and the related heuristics) as open-source software.
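DORY's real tiling is driven by constraint solving over the full memory hierarchy; the sketch below (a hypothetical function with assumed simplifications, not DORY's actual heuristic) only illustrates the underlying idea of sizing a tile so that the layer's weights plus per-row input/output buffers fit a fixed SRAM budget:

```c
#include <assert.h>
#include <stddef.h>

/* Largest number of output rows per tile such that the weights plus the
 * per-row input and output buffers fit in the SRAM budget.
 * Simplified illustration of memory-driven tiling, not DORY's heuristic. */
static size_t rows_per_tile(size_t in_bytes_per_row, size_t out_bytes_per_row,
                            size_t weight_bytes, size_t sram_budget) {
    size_t per_row = in_bytes_per_row + out_bytes_per_row;
    if (weight_bytes >= sram_budget || per_row == 0)
        return 0; /* weights alone already overflow the budget */
    return (sram_budget - weight_bytes) / per_row;
}
```

For example, with a 1 MB budget, 100 kB of weights, and 1.5 kB of activations per output row, the tile would cover 600 rows; a real deployment tool would additionally double-buffer DMA transfers and tile along more than one dimension.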
Journal ArticleDOI

PULP-NN: accelerating quantized neural networks on parallel ultra-low-power RISC-V processors

TL;DR: The key innovation in PULP-NN is a set of kernels for quantized neural network inference, targeting byte and sub-byte data types down to INT-1, tuned for the recent trend toward aggressive quantization in deep neural network inference.
Proceedings ArticleDOI

XpulpNN: accelerating quantized neural networks on RISC-V processors through ISA extensions

TL;DR: This work presents a set of extensions to the RISC-V ISA aimed at boosting the energy efficiency of low-bit-width QNNs on low-power, microcontroller-class cores.
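To picture what such extensions buy, the scalar C model below (illustrative semantics only, not an actual XpulpNN instruction or encoding) mimics a 4-way int8 sum-of-dot-product with accumulation, the kind of operation a SIMD-extended core can retire in a single cycle instead of the dozen-plus scalar instructions modeled here:

```c
#include <assert.h>
#include <stdint.h>

/* Scalar model of a 4-way int8 sum-of-dot-product with accumulation:
 * treats each 32-bit register as four packed signed bytes, multiplies
 * lane-wise, and adds the four products into the accumulator.
 * Illustrative semantics only, not a real instruction encoding. */
static int32_t sdot4_i8(uint32_t a, uint32_t b, int32_t acc) {
    for (int i = 0; i < 4; i++) {
        int8_t ai = (int8_t)(a >> (8 * i)); /* lane i of operand a */
        int8_t bi = (int8_t)(b >> (8 * i)); /* lane i of operand b */
        acc += (int32_t)ai * (int32_t)bi;
    }
    return acc;
}
```

Collapsing this loop into one instruction is where the energy-efficiency gain comes from: the same memory traffic feeds four multiply-accumulates per issued instruction.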
Proceedings ArticleDOI

PULP-NN: A Computing Library for Quantized Neural Network inference at the edge on RISC-V Based Parallel Ultra Low Power Clusters

TL;DR: PULP-NN, a multicore computing library for a parallel ultra-low-power cluster of RISC-V-based processors, consists of a set of kernels for quantized neural network inference on edge devices, targeting byte and sub-byte data types down to INT-1.