Open Access Proceedings Article

Improved Gaussian Mixture Density Estimates Using Bayesian Penalty Terms and Network Averaging

TLDR
Two regularization methods for improving the generalization of Gaussian mixture density estimates are compared: a Bayesian prior on the parameter space and ensemble averaging, including Breiman's "bagging", which has recently been found to produce impressive results for classification networks.
Abstract
We compare two regularization methods which can be used to improve the generalization capabilities of Gaussian mixture density estimates. The first method uses a Bayesian prior on the parameter space. We derive EM (Expectation Maximization) update rules which maximize the a posteriori parameter probability. In the second approach we apply ensemble averaging to density estimation. This includes Breiman's "bagging", which has recently been found to produce impressive results for classification networks.
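The penalized update rules appear in the full text; as a rough illustration only, here is a minimal sketch of MAP (penalized) EM for a diagonal-covariance Gaussian mixture, assuming a symmetric Dirichlet prior on the mixing weights and an inverse-gamma prior on the variances. The hyperparameters alpha, a, and b are illustrative stand-ins, not the paper's exact penalty terms.

```python
import numpy as np

def map_em_gmm(X, K, alpha=2.0, a=1.0, b=0.01, n_iter=100, seed=0):
    """Penalized (MAP) EM for a diagonal-covariance Gaussian mixture.

    alpha: symmetric Dirichlet prior on mixing weights (alpha > 1 smooths them).
    a, b:  inverse-gamma prior on each variance (keeps it away from zero).
    These conjugate priors are illustrative stand-ins for the paper's
    Bayesian penalty terms, not the exact prior used by the authors.
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    mu = X[rng.choice(N, K, replace=False)]          # init means at data points
    var = np.full((K, d), X.var(axis=0))             # init with data variance
    pi = np.full(K, 1.0 / K)

    for _ in range(n_iter):
        # E-step: responsibilities r[n, k] proportional to pi_k * N(x_n | mu_k, var_k)
        log_r = (np.log(pi)
                 - 0.5 * np.sum(np.log(2 * np.pi * var), axis=1)
                 - 0.5 * np.sum((X[:, None, :] - mu) ** 2 / var, axis=2))
        log_r -= log_r.max(axis=1, keepdims=True)    # stabilize before exp
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)

        # M-step with penalty terms (MAP rather than plain ML updates)
        Nk = r.sum(axis=0)                           # effective counts
        pi = (Nk + alpha - 1.0) / (N + K * (alpha - 1.0))
        mu = (r.T @ X) / Nk[:, None]
        S = np.einsum('nk,nkd->kd', r, (X[:, None, :] - mu) ** 2)
        var = (2 * b + S) / (2 * a + Nk[:, None] + 2)  # mode of IG posterior
    return pi, mu, var
```

The prior terms act as the regularizer: they smooth the mixing weights and keep variances bounded away from zero, which is the typical failure mode of unpenalized maximum likelihood.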



Citations
Journal ArticleDOI

Mixtures of probabilistic principal component analyzers

TL;DR: PCA is formulated within a maximum likelihood framework, based on a specific form of Gaussian latent variable model. This leads to a well-defined mixture model for probabilistic principal component analyzers, whose parameters can be determined using an expectation-maximization algorithm.
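The mixture model alternates EM responsibilities with per-component probabilistic PCA fits. As a hedged sketch of the building block, the single-component ML solution has a closed form (following Tipping and Bishop); the mixture version weights the covariance by the responsibilities.

```python
import numpy as np

def fit_ppca(X, q):
    """Closed-form ML fit of a single probabilistic PCA model.

    Returns factor loadings W, isotropic noise variance sigma2, and the
    mean; q is the latent dimension (q < data dimension). The mixture
    model alternates fits like this with an EM step over responsibilities.
    """
    mu = X.mean(axis=0)
    S = np.cov(X - mu, rowvar=False)                 # sample covariance
    evals, evecs = np.linalg.eigh(S)                 # ascending eigenvalues
    evals, evecs = evals[::-1], evecs[:, ::-1]       # sort descending
    sigma2 = evals[q:].mean()                        # avg. discarded variance
    W = evecs[:, :q] * np.sqrt(np.maximum(evals[:q] - sigma2, 0.0))
    return W, sigma2, mu
```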
Journal ArticleDOI

Constructive incremental learning from only local information

TL;DR: A constructive, incremental learning system for regression problems is introduced that models data by means of spatially localized linear models and can allocate resources as needed while dealing with the bias-variance dilemma in a principled way.
Journal ArticleDOI

SMEM Algorithm for Mixture Models

TL;DR: A split-and-merge expectation-maximization algorithm is proposed to overcome the local-maxima problem in parameter estimation of finite mixture models; it is applied to the training of Gaussian mixtures and mixtures of factor analyzers, and its practical usefulness is shown on image compression and pattern recognition problems.
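A hedged sketch of the merge half of the heuristic: in SMEM, two components are good merge candidates when their posterior responsibility vectors across the data are nearly parallel. Assuming a responsibility matrix R of shape (N, K) from a previous E-step:

```python
import numpy as np

def merge_candidates(R):
    """Rank component pairs for merging, in the spirit of SMEM: pairs
    whose posterior responsibility vectors are most similar across the
    data set are the best merge candidates. R has shape (N, K).
    """
    K = R.shape[1]
    scores = R.T @ R                                 # J_merge(i, j) = r_i . r_j
    pairs = [(scores[i, j], i, j) for i in range(K) for j in range(i + 1, K)]
    return sorted(pairs, reverse=True)               # highest score first
```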
Journal ArticleDOI

Scalable Techniques from Nonparametric Statistics for Real Time Robot Learning

TL;DR: This paper introduces several LWL algorithms that have been tested successfully in real-time learning of complex robot tasks, and discusses two major classes of LWL: memory-based LWL and purely incremental LWL, which does not need to store any data explicitly.
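As a minimal sketch of the memory-based class of LWL: keep all training data, and answer each query by fitting a linear model weighted by a Gaussian kernel centred on the query point. The bandwidth is a free parameter of this sketch, not a value from the paper.

```python
import numpy as np

def lwr_predict(Xq, X, y, bandwidth=1.0):
    """Memory-based locally weighted regression: for each query point,
    solve a kernel-weighted least-squares problem on the stored data.
    """
    preds = []
    Xb = np.hstack([X, np.ones((len(X), 1))])        # add bias column
    for xq in np.atleast_2d(Xq):
        w = np.exp(-np.sum((X - xq) ** 2, axis=1) / (2 * bandwidth ** 2))
        WX = Xb * w[:, None]                         # rows scaled by weights
        # solve (Xb' W Xb) beta = Xb' W y
        beta = np.linalg.lstsq(WX.T @ Xb, WX.T @ y, rcond=None)[0]
        preds.append(np.append(xq, 1.0) @ beta)
    return np.array(preds)
```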
Proceedings Article

A Deep and Tractable Density Estimator

TL;DR: This work introduces an efficient procedure to simultaneously train a NADE model for each possible ordering of the variables, by sharing parameters across all these models.
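A rough, simplified sketch of the training idea for binary data: sample a random ordering and a split point per update, let a shared network predict every dimension from the observed prefix, and score only the hidden dimensions. The predict(masked_x, mask) interface is an assumption for illustration; see the paper for the exact reweighting factor.

```python
import numpy as np

def order_agnostic_step(x, predict, rng):
    """One order-agnostic training step: x is a binary vector, predict
    is an assumed model interface returning Bernoulli probabilities for
    every dimension given the masked input and the mask itself.
    """
    D = len(x)
    order = rng.permutation(D)                       # random variable ordering
    d = rng.integers(0, D)                           # first d dims are observed
    mask = np.zeros(D)
    mask[order[:d]] = 1.0
    p = np.clip(predict(x * mask, mask), 1e-7, 1 - 1e-7)
    hidden = order[d:]                               # dims the net must predict
    nll = -np.sum(x[hidden] * np.log(p[hidden])
                  + (1 - x[hidden]) * np.log(1 - p[hidden]))
    # rescale so each split point contributes comparably (see the paper
    # for the exact factor in its derivation)
    return nll * D / (D - d)
```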
References
Journal ArticleDOI

Bagging predictors

Leo Breiman
TL;DR: Tests on real and simulated data sets using classification and regression trees and subset selection in linear regression show that bagging can give substantial gains in accuracy.
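The procedure itself is short enough to sketch: train each predictor on a bootstrap resample of the data and aggregate by averaging (for regression; classification aggregates by voting). make_model is any estimator factory with fit and predict methods.

```python
import numpy as np

def bagging_fit_predict(make_model, X, y, X_test, n_models=25, seed=0):
    """Bootstrap aggregating: each model sees a resample of the training
    set drawn with replacement; predictions are averaged over models.
    """
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(n_models):
        idx = rng.integers(0, len(X), size=len(X))   # sample with replacement
        model = make_model()
        model.fit(X[idx], y[idx])
        preds.append(model.predict(X_test))
    return np.mean(preds, axis=0)                    # aggregate by averaging
```

For example, with scikit-learn one could pass make_model=DecisionTreeRegressor (from sklearn.tree) to bag regression trees, as in the paper's experiments.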
Journal ArticleDOI

Mixture densities, maximum likelihood, and the EM algorithm

Richard A. Redner, Homer F. Walker
01 Apr 1984
TL;DR: This work discusses the formulation and theoretical and practical properties of the EM algorithm, a specialization to the mixture density context of a general algorithm used to approximate maximum-likelihood estimates for incomplete data problems.
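For reference, the standard EM iteration for a K-component Gaussian mixture analyzed in this survey takes the following textbook form (restated here, not quoted from the article):

```latex
% E-step: posterior responsibility of component k for point x_n
r_{nk} = \frac{\pi_k \, \mathcal{N}(x_n \mid \mu_k, \Sigma_k)}
              {\sum_{j=1}^{K} \pi_j \, \mathcal{N}(x_n \mid \mu_j, \Sigma_j)}

% M-step: re-estimate parameters from weighted data, with N_k = \sum_n r_{nk}
\pi_k = \frac{N_k}{N}, \qquad
\mu_k = \frac{1}{N_k} \sum_{n=1}^{N} r_{nk} \, x_n, \qquad
\Sigma_k = \frac{1}{N_k} \sum_{n=1}^{N} r_{nk} (x_n - \mu_k)(x_n - \mu_k)^\top
```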
Journal Article

WHO Technical Report.

Journal ArticleDOI

Active learning with statistical models

TL;DR: Optimal data selection techniques previously used with feed-forward neural networks are shown to extend to two alternative, statistically-based learning architectures: mixtures of Gaussians and locally weighted regression.