Titouan Parcollet
Researcher at University of Avignon
Publications - 61
Citations - 1211
Titouan Parcollet is an academic researcher at the University of Avignon. The author has contributed to research on computer science and artificial neural networks, has an h-index of 12, and has co-authored 47 publications receiving 590 citations.
Papers
Posted Content
Flower: A Friendly Federated Learning Research Framework
TL;DR: Presents Flower, a federated learning (FL) framework that is agnostic to heterogeneous client environments and scales to a large number of clients, including mobile and embedded devices, and describes Flower's design goals and implementation considerations.
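For orientation, here is a minimal client sketch against the open-source flwr package's NumPyClient interface. This is an illustration under assumed API usage, not code from the paper; the single weight vector and the "+0.1" update are hypothetical stand-ins for a real model and local training.

import flwr as fl
import numpy as np

class ToyClient(fl.client.NumPyClient):
    # One weight vector stands in for a real model's parameters.
    def __init__(self):
        self.weights = np.zeros(10, dtype=np.float32)

    def get_parameters(self, config):
        return [self.weights]

    def fit(self, parameters, config):
        # Stand-in for local training on this client's data.
        self.weights = parameters[0] + 0.1
        return [self.weights], 10, {}

    def evaluate(self, parameters, config):
        # Toy loss: norm of the received global parameters.
        loss = float(np.linalg.norm(parameters[0]))
        return loss, 10, {}

# Connect to a running Flower server (address is hypothetical):
# fl.client.start_numpy_client(server_address="127.0.0.1:8080", client=ToyClient())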
Proceedings Article
The PyTorch-Kaldi Speech Recognition Toolkit
TL;DR: The PyTorch-Kaldi project aims to bridge the gap between the Kaldi and PyTorch toolkits, trying to inherit the efficiency of Kaldi and the flexibility of PyTorch.
Journal Article
A survey of quaternion neural networks
TL;DR: This survey reviews past and recent research on quaternion neural networks and their applications across domains, and details the methods, algorithms, and applications of each proposed quaternion-valued neural network.
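As background for the quaternion-valued layers these works build on: for quaternions q = q_r + q_i i + q_j j + q_k k and p = p_r + p_i i + p_j j + p_k k, their Hamilton product is (standard quaternion algebra, stated here for reference):

q \otimes p = (q_r p_r - q_i p_i - q_j p_j - q_k p_k)
            + (q_r p_i + q_i p_r + q_j p_k - q_k p_j)\,\mathbf{i}
            + (q_r p_j - q_i p_k + q_j p_r + q_k p_i)\,\mathbf{j}
            + (q_r p_k + q_i p_j - q_j p_i + q_k p_r)\,\mathbf{k}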
Proceedings Article
Quaternion Convolutional Neural Networks for End-to-End Automatic Speech Recognition
Titouan Parcollet, Ying Zhang, Mohamed Morchid, Chiheb Trabelsi, Georges Linarès, Renato De Mori, Yoshua Bengio +6 more
TL;DR: In this article, a quaternion-valued convolutional neural network (QCNN) is proposed for sequence-to-sequence mapping with the CTC model.
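A minimal PyTorch sketch of the weight sharing that quaternion layers exploit, shown here as a quaternion-valued linear map built from the Hamilton product above. This is an illustration, not the authors' QCNN implementation; the class name, layout convention, and initialization scale are hypothetical.

import torch
import torch.nn as nn

class QuaternionLinear(nn.Module):
    # Maps in_features quaternions to out_features quaternions.
    # Four real matrices are shared across the r/i/j/k components,
    # instead of one (4*in) x (4*out) real matrix.
    def __init__(self, in_features, out_features):
        super().__init__()
        self.Wr = nn.Parameter(0.05 * torch.randn(in_features, out_features))
        self.Wi = nn.Parameter(0.05 * torch.randn(in_features, out_features))
        self.Wj = nn.Parameter(0.05 * torch.randn(in_features, out_features))
        self.Wk = nn.Parameter(0.05 * torch.randn(in_features, out_features))

    def forward(self, x):
        # x: (..., 4 * in_features), laid out as [r | i | j | k].
        r, i, j, k = x.chunk(4, dim=-1)
        # Hamilton product of the input with the quaternion weight.
        out_r = r @ self.Wr - i @ self.Wi - j @ self.Wj - k @ self.Wk
        out_i = r @ self.Wi + i @ self.Wr + j @ self.Wk - k @ self.Wj
        out_j = r @ self.Wj - i @ self.Wk + j @ self.Wr + k @ self.Wi
        out_k = r @ self.Wk + i @ self.Wj - j @ self.Wi + k @ self.Wr
        return torch.cat([out_r, out_i, out_j, out_k], dim=-1)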
Proceedings Article
Quaternion Recurrent Neural Networks
Titouan Parcollet, Mirco Ravanelli, Mohamed Morchid, Georges Linarès, Chiheb Trabelsi, Renato De Mori, Yoshua Bengio +6 more
TL;DR: It is shown that both the QRNN and the QLSTM achieve better performance than the RNN and LSTM in a realistic automatic speech recognition application, while needing up to 3.3x fewer free parameters than real-valued RNNs and LSTMs to reach better results.
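The parameter saving follows from the Hamilton product sharing four real matrices across the quaternion components. A back-of-the-envelope count (the hidden size below is hypothetical) gives the ideal 4x factor; the reported up-to-3.3x is plausibly lower once a network's remaining unshared parameters are counted, though that breakdown is an assumption here, not a figure from the paper.

n = 256                           # hidden size in quaternions (hypothetical)
real_params = (4 * n) ** 2        # real-valued dense map: 4n -> 4n weights
quat_params = 4 * n ** 2          # quaternion map: four shared n x n matrices
print(real_params / quat_params)  # 4.0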