
Guillaume Lajoie

Researcher at Université de Montréal

Publications: 83
Citations: 730

Guillaume Lajoie is an academic researcher at Université de Montréal. His research spans computer science and artificial neural networks. He has an h-index of 12 and has co-authored 50 publications receiving 380 citations. His previous affiliations include the Canadian Institute for Advanced Research and McGill University.

Papers
Posted Content

Gradient Starvation: A Learning Proclivity in Neural Networks

TL;DR: This work provides a theoretical explanation for the emergence of feature imbalance in neural networks and develops guarantees for a novel regularization method aimed at decoupling feature learning dynamics, improving accuracy and robustness in cases hindered by gradient starvation.
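The abstract does not spell out the regularizer. Methods in this line of work are often implemented as an L2 penalty on the network's output logits, which discourages any single dominant feature from starving the gradients of weaker but useful ones. Below is a minimal PyTorch sketch under that assumption; the function name and the coefficient `lam` are illustrative, not taken from the paper.

```python
import torch.nn.functional as F

def decoupled_cross_entropy(logits, targets, lam=0.01):
    """Cross-entropy plus an L2 penalty on the logits.

    Penalizing logit magnitudes is one way to decouple feature learning
    dynamics; `lam` is an illustrative hyperparameter, not a value from
    the paper.
    """
    ce = F.cross_entropy(logits, targets)
    return ce + 0.5 * lam * (logits ** 2).mean()

# usage: loss = decoupled_cross_entropy(model(x), y); loss.backward()
```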
Journal Article

Learning function from structure in neuromorphic networks

TL;DR: The authors construct neuromorphic artificial neural networks endowed with biological connection patterns derived from diffusion-weighted imaging and train them on a memory-encoding task, revealing an interaction between network structure and dynamics.
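One natural way to hold biological wiring fixed while still learning a task is a reservoir-computing setup: the recurrent weights stay frozen and only a linear readout on the hidden states is trained. The sketch below assumes that setup; the leak rate, input scaling, and the stand-in connectivity matrix `W` are all illustrative, not the paper's values.

```python
import numpy as np

def run_reservoir(W, inputs, leak=0.5):
    """Drive a fixed recurrent network (echo-state style) with an input stream.

    W      : (N, N) frozen recurrent weights; in the paper's setting these
             would be derived from diffusion-weighted imaging.
    inputs : (T, N_in) input time series.
    Returns the (T, N) matrix of hidden states; a linear readout trained on
    these states would do the memory-encoding, leaving the wiring untouched.
    """
    rng = np.random.default_rng(0)
    n = W.shape[0]
    W_in = rng.normal(scale=0.1, size=(n, inputs.shape[1]))  # random input projection
    x = np.zeros(n)
    states = []
    for u in inputs:
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u)  # leaky tanh update
        states.append(x.copy())
    return np.stack(states)
```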
Posted Content

Dimensionality compression and expansion in Deep Neural Networks.

TL;DR: This work sheds light on how deep neural networks disentangle data in high-dimensional space while achieving good generalization, and invites new learning strategies focused on optimizing measurable geometric properties of learned representations, beginning with their intrinsic dimensionality.
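A common proxy for the dimensionality of a layer's representation, though not necessarily the estimator used in this paper, is the participation ratio of the covariance eigenvalue spectrum. A minimal sketch:

```python
import numpy as np

def participation_ratio(activations):
    """Participation ratio (sum λ)² / sum λ² of covariance eigenvalues.

    A standard proxy for effective representation dimensionality; shown
    here for illustration, as the paper may use a different estimator.
    activations: (n_samples, n_units) array of one layer's outputs.
    """
    cov = np.cov(activations, rowvar=False)
    eig = np.linalg.eigvalsh(cov)
    return eig.sum() ** 2 / np.square(eig).sum()
```

Tracking this quantity layer by layer is one way to make the compression-then-expansion picture measurable.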
Journal Article

Chaos and reliability in balanced spiking networks with temporal drive.

TL;DR: It is shown that even in the presence of chaos, intermittent periods of highly reliable spiking often coexist with unreliable activity, and the local dynamical mechanisms involved in this intermittent reliability are elucidated.
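Spike-time reliability of this kind can be probed numerically by driving the same network twice with an identical frozen input but slightly perturbed initial conditions, then comparing the two spike rasters. The sketch below uses a generic leaky integrate-and-fire network; the dynamics, units, and parameters are illustrative, not taken from the paper.

```python
import numpy as np

def lif_raster(W, drive, v0, dt=0.1, tau=10.0, v_th=1.0):
    """Leaky integrate-and-fire network under a frozen input drive.

    Call twice with the same `drive` but slightly perturbed `v0`:
    reliable epochs are the stretches where the two rasters coincide
    despite the perturbation. Parameters are illustrative.
    """
    v = v0.astype(float).copy()
    spikes = np.zeros_like(v)
    raster = []
    for I_t in drive:                      # drive: (T, N) frozen input
        v += dt * (-v / tau + I_t) + W @ spikes
        spikes = (v >= v_th).astype(float)
        v[v >= v_th] = 0.0                 # reset neurons that fired
        raster.append(spikes)
    return np.array(raster)
```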
Proceedings Article

Non-normal Recurrent Neural Network (nnRNN): learning long time dependencies while improving expressivity with transient dynamics

TL;DR: In this paper, the authors propose a connectivity structure based on the Schur decomposition, which makes it possible to parametrize matrices with unit-norm eigenspectra without orthogonality constraints on the eigenbases.
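Concretely, a real Schur form W = P(B + T)Pᵀ fixes the spectrum via 2×2 rotation blocks in B while the strictly upper-triangular T adds non-normal transient structure without moving the eigenvalues. A minimal sketch of such a parametrization follows; the parameter names and shapes are illustrative, not the paper's notation.

```python
import numpy as np
from scipy.linalg import expm

def schur_parametrized(thetas, upper, skew):
    """W = P (B + T) P^T: unit-modulus eigenvalues plus transient structure.

    B: block-diagonal 2x2 rotations, so every eigenvalue has norm 1;
    T: strictly upper-triangular entries outside the blocks, giving
       non-normal transients without touching the spectrum;
    P: an orthogonal basis, the matrix exponential of a skew-symmetric map.
    thetas: (n/2,) rotation angles; upper, skew: (n, n) raw parameters.
    """
    n = 2 * len(thetas)
    B = np.zeros((n, n))
    for k, th in enumerate(thetas):
        c, s = np.cos(th), np.sin(th)
        B[2*k:2*k+2, 2*k:2*k+2] = [[c, -s], [s, c]]  # unit-norm eigenpair
    T = np.triu(upper, k=1)
    for k in range(len(thetas)):
        T[2*k, 2*k+1] = 0.0                # keep rotation blocks intact
    S = np.triu(skew, k=1)
    P = expm(S - S.T)                      # skew-symmetric -> orthogonal
    return P @ (B + T) @ P.T
```

Because B + T stays block upper triangular, the eigenvalues remain exactly e^{±iθ} while T is free of any orthogonality constraint.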