Open Access · Journal Article · DOI

Stochastic Complexity and Modeling

Jorma Rissanen
01 Sep 1986 · Vol. 14, Iss. 3, pp. 1080-1100
Abstract
As a modification of the notion of algorithmic complexity, the stochastic complexity of a string of data, relative to a class of probabilistic models, is defined to be the smallest number of binary digits with which the data can be encoded by taking advantage of the selected models. The computation of the stochastic complexity produces a model, which may be taken to incorporate all the statistical information in the data that can be extracted with the chosen model class. This model, for example, allows for optimal prediction, and its parameters are optimized both in their values and their number. A fundamental theorem is proved which gives a lower bound for the code length and, therefore, for prediction errors as well. Finally, the notions of "prior information" and the "useful information" in the data are defined in a new way, and a related construct gives a universal test statistic for hypothesis testing.
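
The two-part coding idea behind this criterion can be made concrete. The sketch below is a minimal illustration, not Rissanen's exact construction: it assumes a hypothetical model class of polynomials with Gaussian noise and uses the standard asymptotic parameter cost of (k/2) log2 n bits for k real parameters, choosing the degree whose total description length is smallest.

    # Minimal two-part MDL sketch: code length of the data given the fitted
    # model, plus an asymptotic code length for the parameters themselves.
    # The polynomial-plus-Gaussian-noise model class and the (k/2) log2 n
    # parameter cost are illustrative assumptions, not the exact stochastic
    # complexity of the paper.
    import numpy as np

    def description_length_bits(x, y, degree):
        """Approximate two-part code length for a polynomial fit of given degree."""
        n = len(y)
        coeffs = np.polyfit(x, y, degree)
        residuals = y - np.polyval(coeffs, x)
        sigma2 = max(np.mean(residuals ** 2), 1e-12)  # ML estimate of noise variance
        # Negative log-likelihood of the data under the fitted Gaussian model, in bits.
        data_bits = 0.5 * n * np.log2(2 * np.pi * np.e * sigma2)
        # Parameter cost: roughly (k/2) log2 n bits for k real-valued coefficients.
        k = degree + 1
        model_bits = 0.5 * k * np.log2(n)
        return data_bits + model_bits

    rng = np.random.default_rng(0)
    x = np.linspace(-1, 1, 200)
    y = 1.0 - 2.0 * x + 3.0 * x ** 2 + rng.normal(scale=0.3, size=x.size)
    best = min(range(1, 8), key=lambda d: description_length_bits(x, y, d))
    print("degree chosen by the MDL criterion:", best)

Replacing the (k/2) log2 n term with an exact code length for the parameters would give the stochastic complexity proper; the asymptotic form shown here is the familiar MDL/BIC-style approximation.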



Citations
Journal Article · DOI

Deep learning in neural networks

TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning, and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.
Journal Article · DOI

Original Contribution: Stacked generalization

David H. Wolpert
05 Feb 1992
TL;DR: The conclusion is that for almost any real-world generalization problem one should use some version of stacked generalization to minimize the generalization error rate.
Book

Bayesian learning for neural networks

TL;DR: Bayesian Learning for Neural Networks shows that Bayesian methods allow complex neural network models to be used without fear of the "overfitting" that can occur with traditional neural network learning methods.
Book

Prediction, learning, and games

TL;DR: The authors provide a comprehensive treatment of predicting individual sequences using expert advice, a general framework within which many related problems can be cast, including repeated game playing, adaptive data compression, sequential investment in the stock market, and sequential pattern analysis. A sketch of one standard algorithm in this framework follows.
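
As a concrete instance of that framework, below is a minimal sketch of the exponentially weighted average forecaster, a standard algorithm for prediction with expert advice. The experts, outcomes, and learning rate are made up for illustration; nothing here is taken from the book itself.

    # Exponentially weighted average forecaster: predict with a weighted
    # average of expert advice, then downweight experts in proportion to
    # their loss. All data here is synthetic.
    import numpy as np

    rng = np.random.default_rng(1)
    n_experts, horizon, eta = 5, 1000, 0.1  # eta is the learning rate
    weights = np.ones(n_experts)
    cum_expert_losses = np.zeros(n_experts)
    total_loss = 0.0

    for t in range(horizon):
        advice = rng.uniform(size=n_experts)           # hypothetical expert forecasts in [0, 1]
        outcome = rng.uniform()                        # hypothetical outcome in [0, 1]
        prediction = weights @ advice / weights.sum()  # weighted average of the advice
        total_loss += (prediction - outcome) ** 2      # squared loss for the forecaster
        expert_losses = (advice - outcome) ** 2
        cum_expert_losses += expert_losses
        weights *= np.exp(-eta * expert_losses)        # exponential downweighting

    print("forecaster loss:", total_loss)
    print("best expert loss:", cum_expert_losses.min())

The forecaster's cumulative loss provably stays close to that of the best expert in hindsight, which is the central kind of guarantee the book develops.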
Journal Article · DOI

Neural networks and the bias/variance dilemma

TL;DR: It is suggested that current-generation feedforward neural networks are largely inadequate for difficult problems in machine perception and machine learning, regardless of parallel-versus-serial hardware or other implementation issues.