
Showing papers by "Ian Goodfellow published in 2011"


02 Jul 2011
TL;DR: This paper describes the different kinds of layers the authors trained to learn representations for the Unsupervised and Transfer Learning Challenge, and the particular one-layer learning algorithms that feed a simple linear classifier trained on a tiny number of labeled samples.
Abstract: Learning good representations from a large set of unlabeled data is a particularly challenging task. Recent work (see Bengio (2009) for a review) shows that training deep architectures is a good way to extract such representations, by extracting and disentangling gradually higher-level factors of variation characterizing the input distribution. In this paper, we describe different kinds of layers we trained for learning representations in the setting of the Unsupervised and Transfer Learning Challenge. The strategy of our team won the final phase of the challenge. It combined and stacked different one-layer unsupervised learning algorithms, adapted to each of the five datasets of the competition. This paper describes that strategy and the particular one-layer learning algorithms feeding a simple linear classifier with a tiny number of labeled training samples (1 to 64 per class).
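The two-stage recipe the abstract describes (fit an unsupervised representation on plentiful unlabeled data, then train a simple linear classifier on a handful of labeled examples) can be sketched as follows. This is an illustration only, not the authors' method: PCA stands in for the paper's one-layer unsupervised algorithms, the data is synthetic, and all names are hypothetical.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# A large unlabeled pool and a tiny labeled set (synthetic stand-ins;
# the challenge used five real datasets with 1 to 64 labels per class).
X_unlabeled = rng.normal(size=(5000, 100))
X_labeled = rng.normal(size=(2, 100))  # 1 labeled example per class
X_labeled[0] += 2.0                    # shift classes apart so they are separable
X_labeled[1] -= 2.0
y_labeled = np.array([0, 1])

# Stage 1: fit an unsupervised representation learner on unlabeled data only.
# PCA is a placeholder for the one-layer algorithms the paper combines and
# stacks (deeper representations chain several such transforms).
rep = PCA(n_components=16).fit(X_unlabeled)

# Stage 2: a simple linear classifier on the learned features, trained
# with only the tiny labeled sample.
clf = LogisticRegression().fit(rep.transform(X_labeled), y_labeled)

print(clf.predict(rep.transform(X_labeled)))
```

The point of the split is that the representation's parameters are estimated entirely from unlabeled data, so the only parameters fit on the scarce labels are the linear classifier's weights.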

67 citations