Emanuele Principi
Researcher at Marche Polytechnic University
Publications - 97
Citations - 1343
Emanuele Principi is an academic researcher at Marche Polytechnic University. His research focuses on artificial neural networks and voice activity detection. He has an h-index of 17 and has co-authored 89 publications receiving 962 citations.
Papers
Journal Article
Non-intrusive load monitoring by using active and reactive power in additive Factorial Hidden Markov Models
Roberto Bonfigli, Emanuele Principi, Marco Fagiani, Marco Severini, Stefano Squartini, Francesco Piazza, et al.
TL;DR: A NILM algorithm based on the joint use of active and reactive power in the additive factorial hidden Markov model framework is proposed; it outperforms AFAMAP, Hart's algorithm, and Hart's algorithm with MAP.
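The core idea of jointly using active and reactive power can be illustrated with a toy disaggregation sketch. The appliance signatures and the brute-force state search below are illustrative assumptions, not the paper's method: the actual work models state sequences with additive factorial hidden Markov models, whose transition dynamics are omitted here.

```python
import itertools

# Toy appliance models: each state has an (active P [W], reactive Q [var])
# signature. Values are made up for illustration, not taken from the paper.
appliances = {
    "fridge": [(0.0, 0.0), (100.0, 60.0)],   # off / on (inductive load)
    "heater": [(0.0, 0.0), (100.0, 0.0)],    # off / on (purely resistive: no Q)
}

def disaggregate(p_obs, q_obs):
    """Pick the joint appliance state whose summed (P, Q) best matches the observation."""
    names = list(appliances)
    best, best_err = None, float("inf")
    for combo in itertools.product(*(range(len(appliances[n])) for n in names)):
        p = sum(appliances[n][s][0] for n, s in zip(names, combo))
        q = sum(appliances[n][s][1] for n, s in zip(names, combo))
        err = (p - p_obs) ** 2 + (q - q_obs) ** 2
        if err < best_err:
            best, best_err = dict(zip(names, combo)), err
    return best

# With active power alone, "fridge on" and "heater on" are indistinguishable
# (both draw 100 W); the reactive component breaks the tie.
print(disaggregate(100.0, 60.0))  # fridge on, heater off
print(disaggregate(100.0, 0.0))   # heater on, fridge off
```

The design point this sketch captures is that two loads with identical active power can still be separated when their reactive signatures differ, which is precisely the motivation for extending the additive model to both quantities.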
Journal Article
Denoising autoencoders for Non-Intrusive Load Monitoring: Improvements and comparative evaluation
Roberto Bonfigli, Andrea Felicetti, Emanuele Principi, Marco Fagiani, Stefano Squartini, Francesco Piazza, et al.
TL;DR: A NILM algorithm based on deep neural networks is proposed; it outperforms the AFAMAP algorithm in both seen and unseen conditions and exhibits significant robustness in the presence of noise.
Journal Article
Unsupervised electric motor fault detection by using deep autoencoders
TL;DR: An unsupervised method for diagnosing electric motor faults is proposed, based on a novelty detection approach using deep autoencoders; the results show that all the autoencoder-based approaches outperform the OC-SVM algorithm.
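The novelty-detection criterion behind this line of work can be sketched as follows: train an autoencoder only on healthy-condition data, then flag any sample whose reconstruction error exceeds a threshold. As a minimal stand-in for the deep autoencoders in the paper, the sketch uses a linear autoencoder (rank-k PCA via SVD); the percentile-based threshold and the synthetic data are assumptions for illustration.

```python
import numpy as np

def fit_linear_autoencoder(X, k):
    """Fit a rank-k linear autoencoder (PCA) on normal-condition data X (n x d)."""
    mu = X.mean(axis=0)
    # Top-k right singular vectors serve as tied encoder/decoder weights.
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]                 # W is (k x d): encoder; W.T decodes

def reconstruction_error(X, mu, W):
    """Per-sample squared reconstruction error ||x - x_hat||^2."""
    Xc = X - mu
    X_hat = Xc @ W.T @ W              # encode, then decode
    return np.sum((Xc - X_hat) ** 2, axis=1)

rng = np.random.default_rng(0)
# "Healthy" signals lie near a 2-D subspace of a 10-D feature space.
basis = rng.normal(size=(2, 10))
healthy = rng.normal(size=(500, 2)) @ basis + 0.05 * rng.normal(size=(500, 10))

mu, W = fit_linear_autoencoder(healthy, k=2)
# Threshold: a high percentile of errors on normal data (an assumed choice).
threshold = np.percentile(reconstruction_error(healthy, mu, W), 99)

# Off-subspace samples stand in for faulty-motor features.
faulty = rng.normal(size=(5, 10))
flags = reconstruction_error(faulty, mu, W) > threshold
```

The unsupervised appeal is that no faulty examples are needed at training time: anything the model reconstructs poorly is, by construction, unlike the healthy data it was fit on.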
Proceedings Article
A neural network based algorithm for speaker localization in a multi-room environment
TL;DR: A speaker localization algorithm based on neural networks for multi-room domestic scenarios is proposed; it outperforms the reference algorithm, providing an average localization error (RMSE) of 525 mm against 1465 mm.
Proceedings Article
Acoustic novelty detection with adversarial autoencoders
TL;DR: The presented approach showed promising results on this task and, if confirmed by additional experiments, could be extended into a general training strategy for autoencoders.