Author

Peter Gacs

Bio: Peter Gacs is an academic researcher from Boston University. The author has contributed to research in the topics of randomness and probability distributions. The author has an h-index of 27 and has co-authored 70 publications receiving 3,034 citations. Previous affiliations of Peter Gacs include Stanford University and the Hungarian Academy of Sciences.


Papers
Journal ArticleDOI
Charles H. Bennett, Peter Gacs, Ming Li, Paul M. B. Vitányi, Wojciech H. Zurek
TL;DR: It is shown that the information distance is a universal cognitive similarity distance, and the maximal correlation of the shortest programs involved, the maximal uncorrelation of programs, and the density properties of the discrete metric spaces induced by the information distances are investigated.
Abstract: While Kolmogorov (1965) complexity is the accepted absolute measure of information content in an individual finite object, a similarly absolute notion is needed for the information distance between two individual objects, for example, two pictures. We give several natural definitions of a universal information metric, based on length of shortest programs for either ordinary computations or reversible (dissipationless) computations. It turns out that these definitions are equivalent up to an additive logarithmic term. We show that the information distance is a universal cognitive similarity distance. We investigate the maximal correlation of the shortest programs involved, the maximal uncorrelation of programs (a generalization of the Slepian-Wolf theorem of classical information theory), and the density properties of the discrete metric spaces induced by the information distances. A related distance measures the amount of nonreversibility of a computation. Using the physical theory of reversible computation, we give an appropriate (universal, antisymmetric, and transitive) measure of the thermodynamic work required to transform one object into another object by the most efficient process. Information distance between individual objects is needed in pattern recognition where one wants to express effective notions of "pattern similarity" or "cognitive similarity" between individual objects and in thermodynamics of computation where one wants to analyze the energy dissipation of a computation from a particular input to a particular output.
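The information distance itself is uncomputable, since Kolmogorov complexity is. As a hedged illustration of how the idea is used in practice, later work (not this paper) approximates it by the normalized compression distance, replacing K with the length of a compressed string; the sketch below uses Python's zlib purely as an example compressor.

```python
import zlib

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance: a computable stand-in for the
    (uncomputable) normalized information distance, with compressed
    length C(.) playing the role of Kolmogorov complexity K(.)."""
    cx, cy, cxy = (len(zlib.compress(s)) for s in (x, y, x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

# Similar objects compress well together, so their distance is small.
print(ncd(b"the quick brown fox " * 20, b"the quick brown fox " * 19))
print(ncd(b"the quick brown fox " * 20, b"lorem ipsum dolor sit " * 20))
```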

479 citations

Journal ArticleDOI
TL;DR: In this article, it is shown that for i.i.d. pairs of random variables with full support, if the conditional probability of Y lying in a set B given that X lies in a set A stays above a positive constant, then as the probability of B goes to 0, the probability of A goes to 0 even faster.
Abstract: For a pair of random variables, $(X, Y)$ on the space $\mathscr{X} \times \mathscr{Y}$ and a positive constant, $\lambda$, it is an important problem of information theory to look for subsets $\mathscr{A}$ of $\mathscr{X}$ and $\mathscr{B}$ of $\mathscr{Y}$ such that the conditional probability of $Y$ being in $\mathscr{B}$ given that $X$ is in $\mathscr{A}$ is larger than $\lambda$. In many typical situations, in order to satisfy this condition, $\mathscr{B}$ must be chosen much larger than $\mathscr{A}$. We shall deal with the most frequently investigated case when $X = (X_1,\cdots, X_n)$, $Y = (Y_1,\cdots, Y_n)$ and $(X_i, Y_i)$ are independent, identically distributed pairs of random variables with a finite range. Suppose that the distribution of $(X, Y)$ is positive for all pairs of values $(x, y)$. We show that if $\mathscr{A}$ and $\mathscr{B}$ satisfy the above condition with a constant $\lambda$ and the probability of $\mathscr{B}$ goes to 0, then the probability of $\mathscr{A}$ goes even faster to 0. Generalizations and some exact estimates of the exponents of probabilities are given. Our methods reveal an interesting connection with a so-called hypercontraction phenomenon in theoretical physics.
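Stated compactly, the claim paraphrased from this abstract is the following (a paraphrase, not the paper's exact theorem):

```latex
% If conditioning on A keeps B at least lambda-likely while B shrinks,
% then A must shrink strictly faster than B:
\[
  P\{Y \in \mathscr{B} \mid X \in \mathscr{A}\} \ge \lambda > 0
  \quad\text{and}\quad
  P\{Y \in \mathscr{B}\} \to 0
  \;\Longrightarrow\;
  \frac{P\{X \in \mathscr{A}\}}{P\{Y \in \mathscr{B}\}} \to 0 .
\]
```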

197 citations

Posted Content
Charles H. Bennett, Peter Gacs, Ming Li, Paul M. B. Vitányi, Wojciech H. Zurek
TL;DR: In this paper, several natural definitions of a universal information metric are proposed, based on the length of shortest programs for either ordinary computations or reversible (dissipationless) computations; these definitions turn out to be equivalent up to an additive logarithmic term.
Abstract: While Kolmogorov complexity is the accepted absolute measure of information content in an individual finite object, a similarly absolute notion is needed for the information distance between two individual objects, for example, two pictures. We give several natural definitions of a universal information metric, based on length of shortest programs for either ordinary computations or reversible (dissipationless) computations. It turns out that these definitions are equivalent up to an additive logarithmic term. We show that the information distance is a universal cognitive similarity distance. We investigate the maximal correlation of the shortest programs involved, the maximal uncorrelation of programs (a generalization of the Slepian-Wolf theorem of classical information theory), and the density properties of the discrete metric spaces induced by the information distances. A related distance measures the amount of nonreversibility of a computation. Using the physical theory of reversible computation, we give an appropriate (universal, anti-symmetric, and transitive) measure of the thermodynamic work required to transform one object into another object by the most efficient process. Information distance between individual objects is needed in pattern recognition where one wants to express effective notions of "pattern similarity" or "cognitive similarity" between individual objects and in thermodynamics of computation where one wants to analyse the energy dissipation of a computation from a particular input to a particular output.

192 citations

Journal ArticleDOI
Peter Gacs
TL;DR: A one-dimensional array of cellular automata is constructed on which arbitrarily large computations can be implemented reliably, even though each automaton at each step makes an error with some constant probability; this construction refutes the positive rates conjecture.
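For orientation only, the noise model can be simulated in a few lines. The toy rule below is a majority-of-three vote, chosen purely for illustration; it is emphatically not Gács's fault-tolerant construction, whose transition rule is far more intricate.

```python
import numpy as np

rng = np.random.default_rng(0)

def step(state: np.ndarray, eps: float) -> np.ndarray:
    """One step of a noisy 1-D cellular automaton: apply a local rule
    (here, majority vote over each cell's radius-1 neighborhood), then
    flip each cell independently with error probability eps."""
    left, right = np.roll(state, 1), np.roll(state, -1)
    new = (left + state + right >= 2).astype(np.uint8)
    noise = rng.random(state.shape) < eps
    return new ^ noise

state = np.ones(200, dtype=np.uint8)   # encode a single stored bit "1"
for _ in range(1000):
    state = step(state, eps=0.01)
print(state.mean())  # plain majority is believed to forget the bit
                     # eventually; Gacs's rule provably preserves it
```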

161 citations


Cited by
Book
01 Jan 1994
TL;DR: In this book, the authors present a brief history of LMIs in control theory and discuss some of the standard problems involving LMIs, linear differential inclusions, and matrix problems, including LMI problems with analytic solutions.
Abstract: Preface. 1. Introduction: Overview; A Brief History of LMIs in Control Theory; Notes on the Style of the Book; Origin of the Book. 2. Some Standard Problems Involving LMIs: Linear Matrix Inequalities; Some Standard Problems; Ellipsoid Algorithm; Interior-Point Methods; Strict and Nonstrict LMIs; Miscellaneous Results on Matrix Inequalities; Some LMI Problems with Analytic Solutions. 3. Some Matrix Problems: Minimizing Condition Number by Scaling; Minimizing Condition Number of a Positive-Definite Matrix; Minimizing Norm by Scaling; Rescaling a Matrix Positive-Definite; Matrix Completion Problems; Quadratic Approximation of a Polytopic Norm; Ellipsoidal Approximation. 4. Linear Differential Inclusions: Differential Inclusions; Some Specific LDIs; Nonlinear System Analysis via LDIs. 5. Analysis of LDIs: State Properties: Quadratic Stability; Invariant Ellipsoids. 6. Analysis of LDIs: Input/Output Properties: Input-to-State Properties; State-to-Output Properties; Input-to-Output Properties. 7. State-Feedback Synthesis for LDIs: Static State-Feedback Controllers; State Properties; Input-to-State Properties; State-to-Output Properties; Input-to-Output Properties; Observer-Based Controllers for Nonlinear Systems. 8. Lur'e and Multiplier Methods: Analysis of Lur'e Systems; Integral Quadratic Constraints; Multipliers for Systems with Unknown Parameters. 9. Systems with Multiplicative Noise: Analysis of Systems with Multiplicative Noise; State-Feedback Synthesis. 10. Miscellaneous Problems: Optimization over an Affine Family of Linear Systems; Analysis of Systems with LTI Perturbations; Positive Orthant Stabilizability; Linear Systems with Delays; Interpolation Problems; The Inverse Problem of Optimal Control; System Realization Problems; Multi-Criterion LQG; Nonconvex Multi-Criterion Quadratic Problems. Notation. List of Acronyms. Bibliography. Index.
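For a flavor of the subject, the most basic control-theoretic LMI is the Lyapunov inequality: x' = Ax is stable exactly when some P = Pᵀ ≻ 0 satisfies AᵀP + PA ≺ 0. For a single fixed A this LMI can be solved in closed form via the Lyapunov equation; the SciPy-based sketch below is a tool choice of this summary, not of the book, which develops ellipsoid and interior-point methods for the general case (uncertain A, many simultaneous constraints).

```python
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

# Find P = P^T > 0 with A^T P + P A < 0 (the Lyapunov LMI).
# For a fixed stable A, picking any Q > 0 and solving the linear
# equation A^T P + P A = -Q yields such a certificate P.
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])            # illustrative stable system

Q = np.eye(2)                           # any positive-definite choice
P = solve_continuous_lyapunov(A.T, -Q)  # solves A^T P + P A = -Q

print(np.linalg.eigvalsh(P))                # all positive => P > 0
print(np.linalg.eigvalsh(A.T @ P + P @ A))  # all negative => LMI holds
```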

11,085 citations

Journal ArticleDOI
29 Jun 1997
TL;DR: It is proved that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit, and experimental results for binary-symmetric channels and Gaussian channels demonstrate that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved.
Abstract: We study two families of error-correcting codes defined in terms of very sparse matrices. "MN" (MacKay-Neal (1995)) codes are recently invented, and "Gallager codes" were first investigated in 1962, but appear to have been largely forgotten, in spite of their excellent properties. The decoding of both codes can be tackled with a practical sum-product algorithm. We prove that these codes are "very good", in that sequences of codes exist which, when optimally decoded, achieve information rates up to the Shannon limit. This result holds not only for the binary-symmetric channel but also for any channel with symmetric stationary ergodic noise. We give experimental results for binary-symmetric channels and Gaussian channels demonstrating that practical performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed, the performance of Gallager codes is almost as close to the Shannon limit as that of turbo codes.
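As context for the decoding step, the sketch below implements Gallager's hard-decision bit-flipping decoder on a toy parity-check matrix; it is a simplified stand-in for the sum-product (belief-propagation) algorithm the paper actually analyzes, and the matrix is invented for illustration.

```python
import numpy as np

H = np.array([[1, 1, 0, 1, 0, 0],   # toy parity-check matrix of a
              [0, 1, 1, 0, 1, 0],   # small (6, 3) code, chosen for
              [1, 0, 1, 0, 0, 1]])  # illustration only

def bit_flip_decode(y: np.ndarray, H: np.ndarray, max_iters: int = 20):
    """Hard-decision bit flipping: while some parity checks fail,
    flip the bit(s) involved in the most unsatisfied checks."""
    y = y.copy()
    for _ in range(max_iters):
        syndrome = H @ y % 2
        if not syndrome.any():
            return y                 # all parity checks satisfied
        unsat = syndrome @ H         # unsatisfied checks per bit
        y[unsat == unsat.max()] ^= 1
    return y

received = np.zeros(6, dtype=int)    # all-zero codeword sent...
received[2] ^= 1                     # ...with one channel error
print(bit_flip_decode(received, H))  # recovers the all-zero codeword
```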

3,842 citations

Book ChapterDOI
01 Jan 2011
TL;DR: Weak-convergence methods in metric spaces are studied in this book, with applications sufficient to show their power and utility, and the results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables.
Abstract: The author's preface gives an outline: "This book is about weak-convergence methods in metric spaces, with applications sufficient to show their power and utility. The Introduction motivates the definitions and indicates how the theory will yield solutions to problems arising outside it. Chapter 1 sets out the basic general theorems, which are then specialized in Chapter 2 to the space C[0, 1] of continuous functions on the unit interval and in Chapter 3 to the space D[0, 1] of functions with discontinuities of the first kind. The results of the first three chapters are used in Chapter 4 to derive a variety of limit theorems for dependent sequences of random variables." The book develops and expands on Donsker's 1951 and 1952 papers on the invariance principle and empirical distributions. The basic random variables remain real-valued although, of course, measures on C[0, 1] and D[0, 1] are vitally used. Within this framework, there are various possibilities for a different and apparently better treatment of the material. More of the general theory of weak convergence of probabilities on separable metric spaces would be useful. Metrizability of the convergence is not brought up until late in the Appendix. The close relation of the Prokhorov metric and a metric for convergence in probability is (hence) not mentioned (see V. Strassen, Ann. Math. Statist. 36 (1965), 423-439; the reviewer, ibid. 39 (1968), 1563-1572). This relation would illuminate and organize such results as Theorems 4.1, 4.2 and 4.4, which give isolated, ad hoc connections between weak convergence of measures and nearness in probability. In the middle of p. 16, it should be noted that C*(S) consists of signed measures which need only be finitely additive if S is not compact. On p. 239, where the author twice speaks of separable subsets having nonmeasurable cardinal, he means "discrete" rather than "separable." Theorem 1.4 is Ulam's theorem that a Borel probability on a complete separable metric space is tight. Theorem 1 of Appendix 3 weakens completeness to topological completeness. After mentioning that probabilities on the rationals are tight, the author says it is an
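The invariance principle the review refers to has a compact statement (a standard formulation included for context, not quoted from the review):

```latex
% Donsker's invariance principle: for i.i.d. X_i with mean 0 and
% variance 1, the rescaled partial-sum path (linearly interpolated
% between lattice points so that it lies in C[0,1]) converges weakly
% to standard Brownian motion W:
\[
  W_n(t) = \frac{1}{\sqrt{n}} \sum_{i \le \lfloor nt \rfloor} X_i ,
  \qquad
  W_n \Rightarrow W \quad \text{in } C[0,1] .
\]
```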

3,554 citations

01 Jan 2019
TL;DR: The book presents a thorough treatment of the central ideas of Kolmogorov complexity and their applications, with a wide range of illustrative examples, and will be ideal for advanced undergraduate students, graduate students, and researchers in computer science, mathematics, cognitive sciences, philosophy, artificial intelligence, statistics, and physics.
Abstract: "The book is outstanding and admirable in many respects. ... is necessary reading for all kinds of readers from undergraduate students to top authorities in the field." (Journal of Symbolic Logic) Written by two experts in the field, this is the only comprehensive and unified treatment of the central ideas of Kolmogorov complexity and their applications. The book presents a thorough treatment of the subject with a wide range of illustrative applications. Such applications include the randomness of finite objects or infinite sequences, Martin-Löf tests for randomness, information theory, computational learning theory, the complexity of algorithms, and the thermodynamics of computing. It will be ideal for advanced undergraduate students, graduate students, and researchers in computer science, mathematics, cognitive sciences, philosophy, artificial intelligence, statistics, and physics. The book is self-contained in that it contains the basic requirements from mathematics and computer science. Also included are numerous problem sets, comments, source references, and hints to solutions of problems. New topics in this edition include Omega numbers, Kolmogorov-Loveland randomness, universal learning, communication complexity, Kolmogorov's random graphs, the time-limited universal distribution, Shannon information, and others.
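For orientation, the book's central quantity admits a one-line definition (the standard formulation, stated here for context):

```latex
% Kolmogorov complexity of a string x relative to a universal machine
% U, together with the invariance theorem: switching to another
% universal machine V changes K by at most an additive constant.
\[
  K_U(x) = \min \{\, |p| : U(p) = x \,\},
  \qquad
  K_U(x) \le K_V(x) + c_{U,V} .
\]
```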

3,361 citations

Book
16 Jan 2012
TL;DR: In this book, a comprehensive treatment of network information theory and its applications is provided, offering the first unified coverage of both classical and recent results, including successive cancellation and superposition coding, MIMO wireless communication, network coding, and cooperative relaying.
Abstract: This comprehensive treatment of network information theory and its applications provides the first unified coverage of both classical and recent results. With an approach that balances the introduction of new models and new coding techniques, readers are guided through Shannon's point-to-point information theory, single-hop networks, multihop networks, and extensions to distributed computing, secrecy, wireless communication, and networking. Elementary mathematical tools and techniques are used throughout, requiring only basic knowledge of probability, whilst unified proofs of coding theorems are based on a few simple lemmas, making the text accessible to newcomers. Key topics covered include successive cancellation and superposition coding, MIMO wireless communication, network coding, and cooperative relaying. Also covered are feedback and interactive communication, capacity approximations and scaling laws, and asynchronous and random access channels. This book is ideal for use in the classroom, for self-study, and as a reference for researchers and engineers in industry and academia.
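As one worked instance of the listed techniques, superposition coding with successive cancellation achieves the capacity region of the two-user Gaussian broadcast channel; the statement below is a standard result quoted for illustration, with the power split α and noise powers N1 < N2 being notation of this summary rather than of the book.

```latex
% Gaussian broadcast channel, noise powers N_1 < N_2, total power P:
% superposition coding with split \alpha P achieves all rate pairs
\[
  R_1 < \tfrac{1}{2}\log\!\Bigl(1 + \tfrac{\alpha P}{N_1}\Bigr),
  \qquad
  R_2 < \tfrac{1}{2}\log\!\Bigl(1 + \tfrac{(1-\alpha)P}{\alpha P + N_2}\Bigr).
\]
```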

2,442 citations