SciSpace (formerly Typeset)
Author

A. N. Kolmogorov

Bio: A. N. Kolmogorov is an academic researcher. The author has an h-index of 1, has co-authored 1 publication, and has received 2,533 citations.

Papers
Journal ArticleDOI
TL;DR: In this article, three approaches to the quantitative definition of information are presented: a combinatorial approach, a probabilistic approach, and an algorithmic approach that measures the information in an object by the length of the shortest program describing it.
Abstract: (1968). Three approaches to the quantitative definition of information. International Journal of Computer Mathematics: Vol. 2, No. 1-4, pp. 157-168.

2,661 citations
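The paper contrasts a counting-based measure, a probabilistic (Shannon) measure, and an algorithmic one. The sketch below is a rough illustration rather than anything from the paper itself: it computes the first two quantities for a toy message and notes why the third can only be bounded, never computed exactly. The message, alphabet size, and function names are arbitrary choices.

```python
import math
from collections import Counter

def combinatorial_information(num_outcomes: int) -> float:
    """Combinatorial approach: an element of a set of N equally admissible
    outcomes carries log2(N) bits, with no probability model involved."""
    return math.log2(num_outcomes)

def probabilistic_information(message: str) -> float:
    """Probabilistic (Shannon) approach: entropy in bits per symbol of the
    empirical symbol distribution of the message."""
    counts = Counter(message)
    n = len(message)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

if __name__ == "__main__":
    # A 16-character message over a 4-letter alphabet.
    msg = "abababababababab"
    print(combinatorial_information(4 ** len(msg)))   # 32.0 bits if all strings were equally admissible
    print(probabilistic_information(msg) * len(msg))  # ~16 bits under the empirical distribution
    # The algorithmic approach would ask for the length of the shortest
    # program that prints msg; that quantity is uncomputable in general,
    # and can only be bounded from above (e.g. by a compressed length).
```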


Cited by
Journal ArticleDOI
TL;DR: This historical survey compactly summarizes relevant work, much of it from the previous millennium, reviewing deep supervised learning, unsupervised learning, reinforcement learning, and evolutionary computation, as well as indirect search for short programs encoding deep and large networks.

14,635 citations

Book
01 Jan 2009
TL;DR: Discusses the motivations and principles of learning algorithms for deep architectures, in particular those that exploit unsupervised learning of single-layer models, such as Restricted Boltzmann Machines, as building blocks for constructing deeper models such as Deep Belief Networks.
Abstract: Can machine learning deliver AI? Theoretical results, inspiration from the brain and cognition, as well as machine learning experiments suggest that in order to learn the kind of complicated functions that can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one would need deep architectures. Deep architectures are composed of multiple levels of non-linear operations, such as in neural nets with many hidden layers, graphical models with many levels of latent variables, or in complicated propositional formulae re-using many sub-formulae. Each level of the architecture represents features at a different level of abstraction, defined as a composition of lower-level features. Searching the parameter space of deep architectures is a difficult task, but new algorithms have been discovered and a new sub-area has emerged in the machine learning community since 2006, following these discoveries. Learning algorithms such as those for Deep Belief Networks and other related unsupervised learning algorithms have recently been proposed to train deep architectures, yielding exciting results and beating the state-of-the-art in certain areas. Learning Deep Architectures for AI discusses the motivations for and principles of learning algorithms for deep architectures. By analyzing and comparing recent results with different learning algorithms for deep architectures, explanations for their success are proposed and discussed, highlighting challenges and suggesting avenues for future explorations in this area.

7,767 citations
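As a purely structural illustration of "multiple levels of non-linear operations," the NumPy sketch below stacks a few tanh layers so that each level's features are compositions of the level below. It does not implement the unsupervised RBM/DBN training the book actually discusses; the layer sizes, weights, and non-linearity are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x: np.ndarray, w: np.ndarray, b: np.ndarray) -> np.ndarray:
    """One level of the architecture: an affine map followed by a
    non-linearity, so its outputs are features composed from the inputs."""
    return np.tanh(x @ w + b)

# Three stacked levels: each level re-uses the features computed below it.
sizes = [8, 16, 16, 4]
params = [(rng.standard_normal((m, n)) * 0.5, np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.standard_normal((5, sizes[0]))   # a batch of 5 raw input vectors
for w, b in params:
    x = layer(x, w, b)                   # deeper level = higher level of abstraction

print(x.shape)  # (5, 4): top-level features built by composing three non-linear maps
```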

Journal ArticleDOI
Jorma Rissanen
TL;DR: The number of digits it takes to write down an observed sequence x_1, ..., x_N of a time series depends on the model with its parameters that one assumes to have generated the observed data.

6,254 citations
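The shortest-data-description idea is often illustrated with a two-part code: bits for the fitted parameters plus bits for the data given those parameters. The sketch below is an MDL-style calculation for a simple Bernoulli model, using the common (k/2) log2 N parameter cost as an assumption; it is not the exact coding scheme of the cited paper.

```python
import math

def two_part_code_length(bits: list[int]) -> float:
    """Crude two-part description length for a binary sequence: bits needed
    to encode the data under the fitted Bernoulli parameter, plus
    (k/2) * log2(N) bits for the parameter itself (k = 1 here)."""
    n = len(bits)
    p = sum(bits) / n
    if p in (0.0, 1.0):
        data_bits = 0.0
    else:
        data_bits = -sum(math.log2(p if b else 1 - p) for b in bits)
    param_bits = 0.5 * math.log2(n)
    return data_bits + param_bits

print(two_part_code_length([0, 1] * 50))          # p = 0.5: ~100 data bits plus parameter cost
print(two_part_code_length([1] * 95 + [0] * 5))   # skewed data: far fewer data bits
```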

01 Jan 2019
TL;DR: The book presents a thorough treatment of the central ideas of Kolmogorov complexity and their applications, with a wide range of illustrative examples, and will be ideal for advanced undergraduate students, graduate students, and researchers in computer science, mathematics, cognitive sciences, philosophy, artificial intelligence, statistics, and physics.
Abstract: "The book is outstanding and admirable in many respects. ... is necessary reading for all kinds of readers from undergraduate students to top authorities in the field." (Journal of Symbolic Logic) Written by two experts in the field, this is the only comprehensive and unified treatment of the central ideas of Kolmogorov complexity and their applications. The book presents a thorough treatment of the subject with a wide range of illustrative applications. Such applications include the randomness of finite objects or infinite sequences, Martin-Löf tests for randomness, information theory, computational learning theory, the complexity of algorithms, and the thermodynamics of computing. It will be ideal for advanced undergraduate students, graduate students, and researchers in computer science, mathematics, cognitive sciences, philosophy, artificial intelligence, statistics, and physics. The book is self-contained in that it covers the basic requirements from mathematics and computer science. Also included are numerous problem sets, comments, source references, and hints to solutions of problems. New topics in this edition include Omega numbers, Kolmogorov-Loveland randomness, universal learning, communication complexity, Kolmogorov's random graphs, time-limited universal distribution, Shannon information, and others.

3,361 citations
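Kolmogorov complexity itself is uncomputable, but the compressed length of a string gives a computable upper bound on how concisely it can be described, which is a common way to illustrate the "randomness of finite objects" theme. The sketch below uses zlib purely as a crude proxy; it is not a construction from the book.

```python
import zlib
import random

def compressed_length(data: bytes) -> int:
    """Length in bytes of the zlib-compressed data: a computable upper bound
    (up to an additive constant) on description length, and a rough stand-in
    for Kolmogorov complexity, which cannot be computed exactly."""
    return len(zlib.compress(data, level=9))

structured = b"ab" * 5000                          # highly regular: admits a short description
random.seed(0)
incompressible = bytes(random.randrange(256) for _ in range(10000))  # random-looking bytes

print(len(structured), compressed_length(structured))          # 10000 -> a few dozen bytes
print(len(incompressible), compressed_length(incompressible))  # 10000 -> roughly the original length
```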

Book
01 Jan 2000
TL;DR: Economists and workers in the financial world will find the presented empirical analysis methods and well-formulated theoretical tools useful for describing systems composed of a huge number of interacting subsystems.
Abstract: This book concerns the use of concepts from statistical physics in the description of financial systems. The authors illustrate the scaling concepts used in probability theory, critical phenomena, and fully developed turbulent fluids. These concepts are then applied to financial time series. The authors also present a stochastic model that displays several of the statistical properties observed in empirical data. Statistical physics concepts such as stochastic dynamics, short- and long-range correlations, self-similarity and scaling permit an understanding of the global behaviour of economic systems without first having to work out a detailed microscopic description of the system. Physicists will find the application of statistical physics concepts to economic systems interesting. Economists and workers in the financial world will find useful the presentation of empirical analysis methods and well-formulated theoretical tools that might help describe systems composed of a huge number of interacting subsystems.

2,826 citations
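As an example of the kind of empirical time-series analysis the abstract refers to, the sketch below computes lag-1 autocorrelations of returns and of their magnitudes on synthetic data. The two-regime volatility pattern is an arbitrary assumption used only to make volatility clustering visible; it is not a model from the book.

```python
import numpy as np

rng = np.random.default_rng(1)

def lag1_autocorr(x: np.ndarray) -> float:
    """Lag-1 autocorrelation, one of the basic statistics used to
    characterise short-range correlations in a return series."""
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

# Toy price series with alternating calm and turbulent regimes, so that
# clustering of return magnitudes shows up in the statistics.
vol = np.repeat([0.005, 0.03], 500)          # two volatility regimes
log_returns = rng.standard_normal(1000) * vol
prices = 100 * np.exp(np.cumsum(log_returns))

print(lag1_autocorr(log_returns))            # near zero: returns themselves are weakly correlated
print(lag1_autocorr(np.abs(log_returns)))    # clearly positive: magnitudes cluster in time
```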