Journal Article

Information, irreversibility and statistical equilibrium

01 Apr 1976, Reports on Mathematical Physics (Pergamon), Vol. 9, Iss. 2, pp. 269-272
TL;DR: In this paper, the roles of information and sufficiency in the coarse-grained interpretation of irreversibility and statistical equilibrium are discussed.
About: This article was published in Reports on Mathematical Physics on 1976-04-01. It has received 1 citation to date. The article focuses on the topic: Interpretation (model theory).
Citations
Journal Article
TL;DR: In this article, the role of sufficiency in passing from the measure-preserving automorphisms of phase space to the macroscopic description of a system is discussed; the authors also address the problem of quantifying the number of elements in a system.

2 citations

References
Journal Article
E. T. Jaynes
TL;DR: In this article, the author considers statistical mechanics as a form of statistical inference rather than as a physical theory, and shows that the usual computational rules, starting with the determination of the partition function, are an immediate consequence of the maximum-entropy principle.
Abstract: Information theory provides a constructive criterion for setting up probability distributions on the basis of partial knowledge, and leads to a type of statistical inference which is called the maximum-entropy estimate. It is the least biased estimate possible on the given information; i.e., it is maximally noncommittal with regard to missing information. If one considers statistical mechanics as a form of statistical inference rather than as a physical theory, it is found that the usual computational rules, starting with the determination of the partition function, are an immediate consequence of the maximum-entropy principle. In the resulting "subjective statistical mechanics," the usual rules are thus justified independently of any physical argument, and in particular independently of experimental verification; whether or not the results agree with experiment, they still represent the best estimates that could have been made on the basis of the information available. It is concluded that statistical mechanics need not be regarded as a physical theory dependent for its validity on the truth of additional assumptions not contained in the laws of mechanics (such as ergodicity, metric transitivity, equal a priori probabilities, etc.). Furthermore, it is possible to maintain a sharp distinction between its physical and statistical aspects. The former consists only of the correct enumeration of the states of a system and their properties; the latter is a straightforward example of statistical inference.

12,099 citations
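
The abstract's central claim, that the partition-function rules follow at once from maximizing entropy under the given constraints, can be sketched as follows. This is the standard textbook derivation with a single mean-energy constraint assumed purely for illustration; it is not quoted from the paper.

    Maximize  S[p] = -\sum_i p_i \ln p_i
    subject to  \sum_i p_i = 1  and  \sum_i p_i E_i = \langle E \rangle.

    Stationarity with Lagrange multipliers \lambda_0 and \beta,
    \frac{\partial}{\partial p_i}\Big[ S[p] - \lambda_0 \sum_j p_j - \beta \sum_j p_j E_j \Big] = 0,
    gives
    p_i = \frac{e^{-\beta E_i}}{Z(\beta)}, \qquad Z(\beta) = \sum_i e^{-\beta E_i},
    with \beta fixed by  \langle E \rangle = -\partial \ln Z / \partial \beta.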

Book
01 Jan 1959

7,235 citations

Book
01 Jan 1963
TL;DR: These notes cover the basic definitions of discrete probability theory, and then present some results including Bayes' rule, inclusion-exclusion formula, Chebyshev's inequality, and the weak law of large numbers.
Abstract: These notes cover the basic definitions of discrete probability theory, and then present some results including Bayes' rule, the inclusion-exclusion formula, Chebyshev's inequality, and the weak law of large numbers. 1. Sample spaces and events. To treat probability rigorously, we define a sample space S whose elements are the possible outcomes of some process or experiment. For example, the sample space might be the outcomes of the roll of a die, or flips of a coin. To each element x of the sample space, we assign a probability, which will be a non-negative number between 0 and 1, which we will denote by p(x). We require that \sum_{x \in S} p(x) = 1, so the total probability of the elements of our sample space is 1. What this means intuitively is that when we perform our process, exactly one of the things in our sample space will happen. Example. The sample space could be S = {a, b, c}, and the probabilities could be p(a) = 1/2, p(b) = 1/3, p(c) = 1/6. If all elements of our sample space have equal probabilities, we call this the uniform probability distribution on our sample space. For example, if our sample space was the outcomes of a die roll, the sample space could be denoted S = {x_1, x_2, ..., x_6}, where the outcome x_i corresponds to rolling i. The uniform distribution, in which every outcome x_i has probability 1/6, describes the situation for a fair die. Similarly, if we consider tossing a fair coin, the outcomes would be H (heads) and T (tails), each with probability 1/2. In this situation we have the uniform probability distribution on the sample space S = {H, T}. We define an event A to be a subset of the sample space. For example, in the roll of a die, if the event A was rolling an even number, then A = {x_2, x_4, x_6}. The probability of an event A, denoted by P(A), is the sum of the probabilities of the corresponding elements in the sample space. For rolling an even number, we have P(A) = p(x_2) + p(x_4) + p(x_6) = 1/2. Given an event A of our sample space, there is a complementary event which consists of all points in our sample space that are not …

6,236 citations
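
The worked die example in that abstract can be reproduced directly. The short Python sketch below only restates the definitions given there; the helper name event_probability is my own and not taken from the notes.

    # Discrete probability as defined in the abstract above:
    # a sample space maps each outcome to a probability (summing to 1),
    # and an event is a subset of the outcomes.

    def event_probability(space, event):
        # P(A) is the sum of p(x) over the outcomes x in the event A.
        return sum(space[x] for x in event)

    # Fair die: the uniform distribution on {1, ..., 6}.
    die = {i: 1 / 6 for i in range(1, 7)}

    even = {2, 4, 6}                            # the event "roll an even number"
    print(event_probability(die, even))         # 0.5, matching P(A) = 1/2 in the abstract

    # Complementary event: the outcomes not in A; its probability is 1 - P(A).
    odd = set(die) - even
    print(event_probability(die, odd))          # 0.5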

Book
29 Mar 1977

6,171 citations