Faustino Gomez
Researcher at Dalle Molle Institute for Artificial Intelligence Research
Publications - 72
Citations - 10267
Faustino Gomez is an academic researcher at the Dalle Molle Institute for Artificial Intelligence Research. He has contributed to research on topics including artificial neural networks and neuroevolution, has an h-index of 32, and has co-authored 72 publications receiving 8299 citations. His previous affiliations include the University of Lugano and the University of Texas at Austin.
Papers
Proceedings ArticleDOI
Connectionist temporal classification: labelling unsegmented sequence data with recurrent neural networks
TL;DR: This paper presents a novel method for training RNNs to label unsegmented sequences directly, removing the need for both pre-segmented training data and post-processing of the network's outputs.
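At the core of CTC is a many-to-one collapsing function that maps a frame-level path of network outputs to a label sequence by merging repeated symbols and then removing blanks. A minimal sketch of that mapping (function name and blank index are illustrative assumptions, not from the paper's code):

```python
def ctc_collapse(path, blank=0):
    """Collapse a frame-level CTC path to a label sequence:
    merge consecutive repeats, then drop the blank symbol.
    `blank=0` is an assumed convention for the blank label."""
    out = []
    prev = None
    for s in path:
        # a symbol is emitted only when it changes and is not blank
        if s != prev and s != blank:
            out.append(s)
        prev = s
    return out
```

Note that a blank between two identical labels keeps them distinct: the path `[1, 1, 0, 1]` collapses to `[1, 1]`, while `[1, 1, 1]` collapses to `[1]`.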
Journal ArticleDOI
Incremental Evolution of Complex General Behavior
Faustino Gomez, Risto Miikkulainen +1 more
TL;DR: This article proposes an approach in which complex general behavior is learned incrementally: evolution starts on a simpler version of the task, which is then gradually made more challenging and general, yielding more effective and more general behavior.
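The incremental scheme above can be sketched as a curriculum loop: evolve on each task in a sequence of increasing difficulty, advancing once performance clears a threshold. The `evolve_step` interface and the fitness threshold here are illustrative assumptions, not the paper's actual setup:

```python
def incremental_evolution(evolve_step, tasks, fitness_threshold):
    """Curriculum sketch (assumed interface): `evolve_step(pop, task)`
    runs one generation and returns (new_population, best_fitness).
    The population carries over from each task to the next, harder one."""
    population = None
    for task in tasks:
        while True:
            population, best = evolve_step(population, task)
            if best >= fitness_threshold:
                break  # move on to the next, more challenging task
    return population
```

The key design point is that the population is never reinitialized between tasks, so behavior evolved on the simple task seeds the search on the harder one.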
Proceedings Article
A Clockwork RNN
TL;DR: This paper introduces a simple, yet powerful modification to the simple RNN architecture, the Clockwork RNN (CW-RNN), in which the hidden layer is partitioned into separate modules, each processing inputs at its own temporal granularity, making computations only at its prescribed clock rate.
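The module-clocking idea can be sketched in a few lines: each block of the hidden state updates only when the time step is divisible by its period, and otherwise keeps its previous value. This simplified sketch uses assumed toy weights and omits the paper's block-triangular constraint on the recurrent matrix (slower modules feeding faster ones):

```python
import numpy as np

def cwrnn_step(h, x, W_h, W_x, periods, t, module_size):
    """One Clockwork-RNN-style step (sketch): module i, with period
    periods[i], updates its slice of the hidden state only when
    t % periods[i] == 0; other modules retain their old activations."""
    pre = np.tanh(W_h @ h + W_x @ x)  # candidate update for all units
    h_new = h.copy()
    for i, p in enumerate(periods):
        if t % p == 0:
            s = i * module_size
            h_new[s:s + module_size] = pre[s:s + module_size]
    return h_new
```

Because slow modules are touched only every `p` steps, long-range information decays more slowly in them, which is the mechanism the paper exploits for long time-lag dependencies.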
Journal Article
Accelerated Neural Evolution through Cooperatively Coevolved Synapses
TL;DR: This paper compares a neuroevolution method called Cooperative Synapse Neuroevolution (CoSyNE), which applies cooperative coevolution at the level of individual synaptic weights, to a broad range of reinforcement learning algorithms on very difficult versions of the pole-balancing problem involving large state spaces and hidden state.
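Coevolution at the synapse level means each weight of the network has its own subpopulation; complete networks are assembled by taking one value from every subpopulation, evaluated together, and the surviving values are permuted within each subpopulation so they recombine into new networks. A minimal sketch under assumed toy settings (population size, mutation noise, and truncation selection are illustrative choices, not the paper's exact parameters):

```python
import random

def cosyne_sketch(fitness, n_weights, pop=20, gens=30, seed=0):
    """CoSyNE-style sketch: one subpopulation per synaptic weight.
    Row i across all subpopulations forms network i; after selection,
    each weight column is refilled by mutation and shuffled so weight
    values recombine into different networks next generation."""
    rng = random.Random(seed)
    # subpops[j][i] = i-th candidate value for weight j
    subpops = [[rng.uniform(-1, 1) for _ in range(pop)]
               for _ in range(n_weights)]
    for _ in range(gens):
        nets = [[subpops[j][i] for j in range(n_weights)] for i in range(pop)]
        scores = [fitness(w) for w in nets]
        elite = sorted(range(pop), key=lambda i: scores[i], reverse=True)[:pop // 2]
        for j in range(n_weights):
            survivors = [subpops[j][i] for i in elite]
            # refill the column with mutated copies, then permute it
            col = survivors + [w + rng.gauss(0, 0.1) for w in survivors]
            rng.shuffle(col)
            subpops[j] = col
    return max(([subpops[j][i] for j in range(n_weights)] for i in range(pop)),
               key=fitness)
```

The column shuffle is the distinctive step: it decouples a weight value from the particular network it was evaluated in, so good synapse values spread through the population independently.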
Journal ArticleDOI
Training Recurrent Networks by Evolino
TL;DR: It is shown that Evolino-based LSTM can solve tasks that Echo State networks cannot, and achieves higher accuracy on certain continuous function generation tasks than RNNs trained by conventional gradient descent, including gradient-based LSTM.
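Evolino's central idea is a hybrid evaluation: evolution searches over the recurrent (and input) weights, while the linear output weights are not evolved at all but computed analytically from the hidden activations, e.g. by least squares. A toy sketch of that evaluation, using a plain tanh RNN instead of the paper's LSTM and an assumed least-squares readout:

```python
import numpy as np

def evolino_eval(W_rec, W_in, inputs, targets):
    """Evolino-style fitness evaluation (sketch): roll out the candidate
    recurrent network over the inputs, then solve for the linear output
    weights by least squares rather than evolving them."""
    h = np.zeros(W_rec.shape[0])
    H = []
    for x in inputs:
        h = np.tanh(W_rec @ h + W_in @ x)
        H.append(h)
    H = np.array(H)  # hidden activations, one row per time step
    W_out, *_ = np.linalg.lstsq(H, targets, rcond=None)
    mse = float(np.mean((H @ W_out - targets) ** 2))
    return mse, W_out
```

Each candidate's fitness (here, negative MSE) then drives the evolutionary search over `W_rec` and `W_in`; the analytic readout means every candidate is judged with its best possible output layer.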