scispace - formally typeset

Fabio Vesperini

Researcher at Marche Polytechnic University

Publications: 20
Citations: 672

Fabio Vesperini is an academic researcher from Marche Polytechnic University. He has contributed to research on topics including artificial neural networks and convolutional neural networks. The author has an h-index of 9 and has co-authored 20 publications receiving 527 citations.

Papers
Journal ArticleDOI

Polyphonic Sound Event Detection by Using Capsule Neural Networks

TL;DR: Extensive evaluations carried out on three publicly available datasets are reported, showing that the CapsNet-based algorithm not only outperforms standard CNNs but also achieves the best results with respect to state-of-the-art algorithms.
Journal ArticleDOI

Localizing speakers in multiple rooms by using Deep Neural Networks

TL;DR: It is shown that the DNN-based algorithm significantly outperforms the state-of-the-art approaches evaluated on the DIRHA dataset, providing an average localization error, expressed as Root Mean Square Error (RMSE), of 324 mm and 367 mm for the Simulated and Real subsets, respectively.
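The RMSE figure quoted above is the square root of the mean squared Euclidean distance between estimated and ground-truth speaker positions. A minimal sketch of that metric, assuming positions are given as coordinate tuples in millimetres (the function name and data layout are illustrative, not the paper's code):

```python
import math

def localization_rmse(predicted, reference):
    """RMSE between predicted and reference positions (same units as input)."""
    # Squared Euclidean distance for each prediction / ground-truth pair
    sq_errors = [
        sum((p - r) ** 2 for p, r in zip(pred, ref))
        for pred, ref in zip(predicted, reference)
    ]
    return math.sqrt(sum(sq_errors) / len(sq_errors))
```

For example, a single prediction at the origin against a true position at (300, 400) mm yields an RMSE of 500 mm.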
Proceedings ArticleDOI

Few-Shot Siamese Neural Networks Employing Audio Features for Human-Fall Detection

TL;DR: A first study of few-shot learning with Siamese Neural Networks applied to human-fall detection using audio signals is presented; the network is able to learn the differences between signals belonging to different classes of events.
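In few-shot settings of this kind, a query clip is typically labelled by comparing its learned embedding against the few labelled support examples available. A minimal nearest-support sketch, assuming embeddings are plain numeric vectors (the helper names, the L1 distance choice, and the example labels are assumptions for illustration, not the paper's implementation):

```python
def l1_distance(emb_a, emb_b):
    """L1 distance between two embedding vectors."""
    return sum(abs(a - b) for a, b in zip(emb_a, emb_b))

def classify_few_shot(query_emb, support_set):
    """Label the query with the class of its nearest support embedding.

    support_set is a list of (label, embedding) pairs, e.g. the few
    labelled 'fall' / 'non-fall' audio clips available at deployment.
    """
    label, _ = min(support_set, key=lambda item: l1_distance(query_emb, item[1]))
    return label
```

A Siamese network's role is to produce embeddings in which same-class pairs are close and different-class pairs are far apart, so that a simple distance comparison like this one suffices at inference time.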
Proceedings ArticleDOI

Deep neural networks for Multi-Room Voice Activity Detection: Advancements and comparative evaluation

TL;DR: This paper focuses on Voice Activity Detectors (VADs) for multi-room domestic scenarios based on deep neural network architectures; a comparative and extensive analysis is conducted among four different neural networks (NNs).
Proceedings ArticleDOI

A neural network approach for sound event detection in real life audio

TL;DR: This paper presents and compares two algorithms based on artificial neural networks (ANNs) for sound event detection in real-life audio, reporting results obtained with two different neural architectures, namely multi-layer perceptrons (MLPs) and recurrent neural networks (RNNs).