
Generative model

About: Generative model is a research topic. Over its lifetime, 7,819 publications have been published within this topic, receiving 366,606 citations.

Papers

Open access · Journal Article · DOI: 10.3156/JSOFT.29.5_177_2
Ian Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, +4 more (2 institutions)
08 Dec 2014
Abstract: We propose a new framework for estimating generative models via an adversarial process, in which we simultaneously train two models: a generative model G that captures the data distribution, and a discriminative model D that estimates the probability that a sample came from the training data rather than G. The training procedure for G is to maximize the probability of D making a mistake. This framework corresponds to a minimax two-player game. In the space of arbitrary functions G and D, a unique solution exists, with G recovering the training data distribution and D equal to ½ everywhere. In the case where G and D are defined by multilayer perceptrons, the entire system can be trained with backpropagation. There is no need for any Markov chains or unrolled approximate inference networks during either training or generation of samples. Experiments demonstrate the potential of the framework through qualitative and quantitative evaluation of the generated samples.

Topics: Generative model (64%), Discriminative model (54%), Approximate inference (53%)

29,410 Citations
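
A minimal sketch of the adversarial training loop the abstract describes, assuming PyTorch; the layer sizes, optimizer settings, and the common non-saturating generator loss are illustrative assumptions, not the paper's exact setup:

import torch
import torch.nn as nn

latent_dim, data_dim = 100, 784  # illustrative sizes, e.g. flattened 28x28 images

# G maps noise z to samples; D outputs the probability a sample is real.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, data_dim), nn.Tanh())
D = nn.Sequential(nn.Linear(data_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

bce = nn.BCELoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_batch):
    b = real_batch.size(0)
    ones, zeros = torch.ones(b, 1), torch.zeros(b, 1)

    # Discriminator step: push D(x) toward 1 on data, D(G(z)) toward 0.
    fake = G(torch.randn(b, latent_dim)).detach()
    loss_d = bce(D(real_batch), ones) + bce(D(fake), zeros)
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step: push D(G(z)) toward 1, i.e. maximize the probability
    # of D making a mistake (non-saturating form of the minimax objective).
    fake = G(torch.randn(b, latent_dim))
    loss_g = bce(D(fake), ones)
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

At the game's unique equilibrium over arbitrary G and D, G reproduces the data distribution and D outputs ½ everywhere, so neither loss can improve further.
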


Open access · Proceedings Article
03 Jan 2001
Abstract: We propose a generative model for text and other collections of discrete data that generalizes or improves on several previous models including naive Bayes/unigram, mixture of unigrams [6], and Hofmann's aspect model, also known as probabilistic latent semantic indexing (pLSI) [3]. In the context of text modeling, our model posits that each document is generated as a mixture of topics, where the continuous-valued mixture proportions are distributed as a latent Dirichlet random variable. Inference and learning are carried out efficiently via variational algorithms. We present empirical results on applications of this model to problems in text modeling, collaborative filtering, and text classification.

25,546 Citations
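
The generative process the abstract posits is compact enough to write down directly. A sketch in plain NumPy, showing only the forward model (not the variational inference); the corpus sizes and hyperparameters are arbitrary illustrations:

import numpy as np

rng = np.random.default_rng(0)
K, V, doc_len = 5, 1000, 50            # illustrative: topics, vocabulary size, words per document
alpha = np.full(K, 0.1)                # Dirichlet prior over topic proportions
beta = rng.dirichlet(np.full(V, 0.01), size=K)   # one word distribution per topic

def generate_document():
    theta = rng.dirichlet(alpha)       # latent Dirichlet variable: this document's topic mixture
    words = []
    for _ in range(doc_len):
        z = rng.choice(K, p=theta)     # pick a topic for this word position
        words.append(rng.choice(V, p=beta[z]))   # draw a word id from that topic
    return words

doc = generate_document()

Inference runs this process in reverse: given only the word ids, the variational algorithm recovers posterior estimates of theta and z for each document.
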


Open access · Dissertation
Alex Krizhevsky (1 institution)
01 Jan 2009
Abstract: In this work we describe how to train a multi-layer generative model of natural images. We use a dataset of millions of tiny colour images, described in the next section. This has been attempted by several groups but without success. The models on which we focus are RBMs (Restricted Boltzmann Machines) and DBNs (Deep Belief Networks). These models learn interesting-looking filters, which we show are more useful to a classifier than the raw pixels. We train the classifier on a labeled subset that we have collected and call the CIFAR-10 dataset.

Topics: Deep belief network (57%), Generative model (52%), Boltzmann machine (51%)

14,902 Citations
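
A hedged illustration of the abstract's central claim, that learned filters are more useful to a classifier than raw pixels, using scikit-learn's BernoulliRBM as a stand-in for the thesis's models; the toy data, sizes, and pipeline are assumptions, not the thesis's code:

import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)
X = rng.random((200, 64))              # stand-in for flattened tiny images scaled to [0, 1]
y = (X[:, :32].mean(axis=1) > X[:, 32:].mean(axis=1)).astype(int)   # toy labels

# Raw-pixel baseline vs. logistic regression on the RBM's hidden activations.
baseline = LogisticRegression(max_iter=1000).fit(X, y)
rbm_clf = Pipeline([
    ("rbm", BernoulliRBM(n_components=32, learning_rate=0.05, n_iter=20, random_state=0)),
    ("logreg", LogisticRegression(max_iter=1000)),
]).fit(X, y)
print(baseline.score(X, y), rbm_clf.score(X, y))

The design mirrors the thesis's evaluation: the generative model is trained without labels, and its hidden representation is then handed to an ordinary supervised classifier.
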


Journal Article · DOI: 10.1162/NECO.2006.18.7.1527
01 Jul 2006 · Neural Computation
Abstract: We show how to use "complementary priors" to eliminate the explaining-away effects that make inference difficult in densely connected belief nets that have many hidden layers. Using complementary priors, we derive a fast, greedy algorithm that can learn deep, directed belief networks one layer at a time, provided the top two layers form an undirected associative memory. The fast, greedy algorithm is used to initialize a slower learning procedure that fine-tunes the weights using a contrastive version of the wake-sleep algorithm. After fine-tuning, a network with three hidden layers forms a very good generative model of the joint distribution of handwritten digit images and their labels. This generative model gives better digit classification than the best discriminative learning algorithms. The low-dimensional manifolds on which the digits lie are modeled by long ravines in the free-energy landscape of the top-level associative memory, and it is easy to explore these ravines by using the directed connections to display what the associative memory has in mind.

Topics: Deep belief network (63%), Convolutional Deep Belief Networks (61%), Generative model (58%)

13,005 Citations
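
A sketch of the greedy layer-by-layer procedure in plain NumPy, assuming a minimal CD-1 RBM as each layer; the contrastive wake-sleep fine-tuning and the top-level associative memory are omitted, and all names and sizes are illustrative:

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_vis, n_hid, lr=0.05):
        self.W = rng.normal(0.0, 0.01, (n_vis, n_hid))
        self.b_vis, self.b_hid = np.zeros(n_vis), np.zeros(n_hid)
        self.lr = lr

    def cd1(self, v0):
        # One contrastive-divergence update: a data-driven phase minus
        # a one-step Gibbs reconstruction phase.
        h0 = sigmoid(v0 @ self.W + self.b_hid)
        v1 = sigmoid(h0 @ self.W.T + self.b_vis)
        h1 = sigmoid(v1 @ self.W + self.b_hid)
        self.W += self.lr * (np.outer(v0, h0) - np.outer(v1, h1))
        self.b_vis += self.lr * (v0 - v1)
        self.b_hid += self.lr * (h0 - h1)

    def up(self, v):
        # Mean-field hidden activations, used as data for the next layer.
        return sigmoid(v @ self.W + self.b_hid)

def pretrain_dbn(data, layer_sizes, epochs=5):
    # Greedy stacking: train one RBM at a time, each layer modeling the
    # hidden activities of the layer below it.
    rbms, x = [], data
    for n_vis, n_hid in zip(layer_sizes[:-1], layer_sizes[1:]):
        rbm = RBM(n_vis, n_hid)
        for _ in range(epochs):
            for v in x:
                rbm.cd1(v)
        x = np.array([rbm.up(v) for v in x])
        rbms.append(rbm)
    return rbms

rbms = pretrain_dbn(rng.integers(0, 2, (100, 64)).astype(float), [64, 32, 16])

In the paper, this stack initializes a deeper network that is then fine-tuned with a contrastive version of the wake-sleep algorithm before classification.
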


Open access · Book
Aapo Hyvärinen, Juha Karhunen, Erkki Oja (1 institution)
18 May 2001
Abstract: In this chapter, we discuss a statistical generative model called independent component analysis. It is basically a proper probabilistic formulation of the ideas underpinning sparse coding. It shows how sparse coding can be interpreted as providing a Bayesian prior, and answers some questions which were not properly answered in the sparse coding framework.

Topics: Neural coding (54%), Generative model (54%), Prior probability (50%)

8,330 Citations
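
The book predates today's libraries, but the generative model it formulates is easy to demonstrate. A standard blind-source-separation toy example, assuming scikit-learn's FastICA; the sources and mixing matrix are invented for illustration:

import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
S = np.c_[np.sin(2 * t), np.sign(np.cos(3 * t))]   # two independent non-Gaussian sources
S += 0.05 * rng.standard_normal(S.shape)           # a little observation noise
A = np.array([[1.0, 0.5], [0.5, 2.0]])             # unknown mixing matrix
X = S @ A.T                                        # observed mixtures: x = A s

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)   # estimated independent components (up to order and scale)
A_est = ica.mixing_            # estimated mixing matrix

The generative reading is x = A s with statistically independent, non-Gaussian components of s; estimation inverts the mixing, which is where the Bayesian-prior view of sparse coding enters.
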


Performance Metrics

Number of papers in the topic in previous years:

Year    Papers
2022    11
2021    1,158
2020    1,256
2019    1,118
2018    784
2017    506

Top Attributes

Topic's top 5 most impactful authors:

Song-Chun Zhu: 28 papers, 781 citations
Karl J. Friston: 26 papers, 1.4K citations
Yoshua Bengio: 25 papers, 44.7K citations
Geoffrey E. Hinton: 19 papers, 4.6K citations
Vittorio Murino: 18 papers, 216 citations

Network Information
Related Topics (5)
Multi-task learning: 7.2K papers, 305.1K citations (92% related)
Feature learning: 15.5K papers, 684.7K citations (92% related)
Supervised learning: 20.8K papers, 710.5K citations (91% related)
Semi-supervised learning: 12.1K papers, 611.2K citations (91% related)
Deep learning: 79.8K papers, 2.1M citations (89% related)