
Pierre-Amaury Grumiaux

Publications -  8
Citations -  64

Pierre-Amaury Grumiaux is an academic researcher. The author has contributed to research in the topics of audio signal processing and recurrent neural networks. The author has an h-index of 2 and has co-authored 7 publications receiving 10 citations.

Papers
Posted Content

A Survey of Sound Source Localization with Deep Learning Methods.

TL;DR: In this article, the authors present a survey of deep learning methods for single and multiple sound source localization, providing an exhaustive topography of the neural-based localization literature in this context, organized according to several aspects.
Posted Content

Improved feature extraction for CRNN-based multiple sound source localization.

TL;DR: In this paper, the authors proposed several configurations with more convolutional layers and smaller pooling sizes in between, so that less information is lost across the layers, leading to better feature extraction.
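The sketch below illustrates the general idea of spreading pooling over more convolutional layers with smaller pooling sizes; it is an assumption for illustration, not the authors' published architecture, and all layer sizes, channel counts, and input shapes are made up.

```python
# Minimal sketch (assumed, not the authors' code): a CRNN front-end where
# pooling is spread over several conv blocks with small (1, 2) frequency
# pooling, instead of fewer blocks with aggressive pooling, so less
# time-frequency information is discarded at any single stage.
import torch
import torch.nn as nn

class CRNNFrontEnd(nn.Module):
    def __init__(self, in_channels=4, n_freq=128, hidden=64):
        super().__init__()
        chans = [in_channels, 32, 64, 64, 64]
        blocks = []
        for c_in, c_out in zip(chans[:-1], chans[1:]):
            blocks += [
                nn.Conv2d(c_in, c_out, kernel_size=3, padding=1),
                nn.BatchNorm2d(c_out),
                nn.ReLU(),
                nn.MaxPool2d(kernel_size=(1, 2)),  # pool only along frequency
            ]
        self.conv = nn.Sequential(*blocks)
        self.gru = nn.GRU(64 * (n_freq // 2**4), hidden,
                          batch_first=True, bidirectional=True)

    def forward(self, x):            # x: (batch, channels, time, freq)
        f = self.conv(x)             # (batch, 64, time, freq / 16)
        b, c, t, q = f.shape
        f = f.permute(0, 2, 1, 3).reshape(b, t, c * q)
        out, _ = self.gru(f)         # per-frame features for localization
        return out

feats = CRNNFrontEnd()(torch.randn(2, 4, 100, 128))  # -> (2, 100, 128)
```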
Posted Content

High-Resolution Speaker Counting In Reverberant Rooms Using CRNN With Ambisonics Features

TL;DR: This work addresses the speaker counting problem with a multichannel convolutional recurrent neural network that produces estimates at short-term frame resolution and predicts the number of speakers with good accuracy.
Proceedings Article

High-Resolution Speaker Counting in Reverberant Rooms Using CRNN with Ambisonics Features

TL;DR: In this article, a multichannel convolutional recurrent neural network was used to predict the number of speakers with good accuracy at frame resolution, trained on simulated data covering many different conditions in terms of source and microphone positions, reverberation, and noise.
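A minimal sketch of how frame-resolution counting can be framed, assuming per-frame features from a CRNN front-end and a per-frame classification over the number of active speakers; the feature dimension and maximum speaker count are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (assumed, not the published model): per-frame speaker
# counting as classification over 0..max_speakers active speakers.
import torch
import torch.nn as nn

class FrameSpeakerCounter(nn.Module):
    def __init__(self, feat_dim=128, max_speakers=5):
        super().__init__()
        # One linear classifier applied independently to every time frame.
        self.head = nn.Linear(feat_dim, max_speakers + 1)

    def forward(self, frame_feats):          # (batch, time, feat_dim)
        logits = self.head(frame_feats)      # (batch, time, max_speakers + 1)
        return logits.argmax(dim=-1)         # predicted count per frame

counts = FrameSpeakerCounter()(torch.randn(2, 100, 128))  # -> (2, 100)
```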
Posted Content

SALADnet: Self-Attentive multisource Localization in the Ambisonics Domain.

TL;DR: In this article, a self-attention-based neural network was proposed for robust multi-speaker localization from Ambisonics recordings, where the authors investigated the benefit of replacing the recurrent layers with self-attention encoders inherited from the Transformer architecture.
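The sketch below shows, under stated assumptions, what replacing recurrent layers with a Transformer-style self-attention encoder over time frames can look like; it is not the SALADnet code, and the feature dimension, head count, and layer count are illustrative.

```python
# Minimal sketch (assumed, not the SALADnet implementation): swapping the
# recurrent layers of a CRNN for a self-attention encoder that still
# outputs one feature vector per time frame.
import torch
import torch.nn as nn

class SelfAttentiveTemporalModel(nn.Module):
    def __init__(self, feat_dim=128, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=feat_dim, nhead=n_heads,
                                           dim_feedforward=256,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)

    def forward(self, frame_feats):       # (batch, time, feat_dim)
        return self.encoder(frame_feats)  # same shape, self-attended over time

out = SelfAttentiveTemporalModel()(torch.randn(2, 100, 128))  # -> (2, 100, 128)
```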