Bernoulli process
In probability and statistics, a Bernoulli process is a finite or infinite sequence of binary random variables, so it is a discrete-time stochastic process that takes only two values, canonically 0 and 1. The component Bernoulli variables Xi are independent and identically distributed. Prosaically, a Bernoulli process is repeated coin flipping, possibly with an unfair coin. Every variable Xi in the sequence is associated with a Bernoulli trial or experiment. They all have the same Bernoulli distribution. Much of what can be said about the Bernoulli process can also be generalized to more than two outcomes; this generalization is known as the Bernoulli scheme.
The problem of determining the process, given only a limited sample of Bernoulli trials, may be called the problem of checking whether a coin is fair.
Definition
A Bernoulli process is a finite or infinite sequence of independent random variables X1, X2, X3, ..., such that:
- for each i, the value of Xi is either 0 or 1;
- for all values of i, the probability p that Xi = 1 is the same.
Independence of the trials implies that the process is memoryless. Given that the probability p is known, past outcomes provide no information about future outcomes.
If the process is infinite, then from any point the future trials constitute a Bernoulli process identical to the whole process; this is known as the fresh-start property.
Interpretation
The two possible values of each Xi are often called "success" and "failure". Thus, when expressed as a number 0 or 1, the outcome may be called the number of successes on the ith "trial". Two other common interpretations of the values are true or false and yes or no. Under any interpretation of the two values, the individual variables Xi may be called Bernoulli trials with parameter p.
In many applications time passes between trials, as the index i increases. In effect, the trials X1, X2, ..., Xi, ... happen at "points in time" 1, 2, ..., i, .... That passage of time and the associated notions of "past" and "future" are not necessary, however. Most generally, any Xi and Xj in the process are simply two from a set of random variables indexed by {1, 2, ..., n} in the finite case, or by {1, 2, 3, ...} in the infinite case.
A single experiment with only two possible outcomes, often referred to as "success" and "failure" and usually encoded as 1 and 0, can be modeled as a Bernoulli distribution. Several random variables and probability distributions besides the Bernoulli may be derived from the Bernoulli process (a simulation sketch follows the list):
- The number of successes in the first n trials, which has a binomial distribution B(n, p)
- The number of failures needed to get r successes, which has a negative binomial distribution NB(r, p)
- The number of failures needed to get one success, which has a geometric distribution NB(1, p), a special case of the negative binomial distribution
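A minimal simulation sketch in Python illustrates these derived quantities; the use of NumPy, the seed, and the variable names are illustrative choices, not part of any standard definition:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
p = 0.3          # success probability of each Bernoulli trial
n = 20           # number of trials to examine

# One realization of a (finite) Bernoulli process: n i.i.d. 0/1 variables.
trials = rng.binomial(1, p, size=n)

# Binomial: number of successes in the first n trials.
successes = trials.sum()

# Geometric (NB(1, p)): number of failures before the first success
# (assumes at least one success occurs in this window).
failures_before_first = int(np.argmax(trials == 1))

print(trials, successes, failures_before_first)
```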
Formal definition
The Bernoulli process can be formalized in the language of probability spaces as a random sequence of independent realisations of a random variable that can take values of heads or tails. The state space for an individual value is denoted by 2 = {H, T}. Specifically, one considers the countably infinite direct product of copies of 2 = {H, T}. It is common to examine either the one-sided set Ω = 2^N = {H, T}^N or the two-sided set Ω = 2^Z. There is a natural topology on this space, called the product topology. The sets in this topology are finite sequences of coin flips, that is, finite-length strings of H and T, with the rest of the sequence taken as "don't care". These sets of finite sequences are referred to as cylinder sets in the product topology. The set of all such strings forms a sigma algebra, specifically, a Borel algebra. This algebra is then commonly written as (Ω, ℬ), where the elements of ℬ are generated by the finite-length sequences of coin flips, the cylinder sets.
If the chances of flipping heads or tails are given by the probabilities {p, 1 − p}, then one can define a natural measure on the product space, given by P = {p, 1 − p}^N (or by P = {p, 1 − p}^Z for the two-sided process). In other words, if a discrete random variable X has a Bernoulli distribution with parameter p, where 0 ≤ p ≤ 1, then its probability mass function is given by

    P(X = 1) = p and P(X = 0) = 1 − p.

We denote this distribution by Ber(p).
Given a cylinder set, that is, a specific sequence of coin flip results [ω1, ω2, ..., ωn] at times 1, 2, ..., n, the probability of observing this particular sequence is given by

    P([ω1, ω2, ..., ωn]) = p^k (1 − p)^(n−k)

where k is the number of times that H appears in the sequence, and n − k is the number of times that T appears in the sequence. There are several different kinds of notations for the above; a common one is to write

    P(X1 = x1, X2 = x2, ..., Xn = xn) = p^k (1 − p)^(n−k)

where each Xi is a binary-valued random variable with xi = [ωi = H] in Iverson bracket notation, meaning either 1 if ωi = H or 0 if ωi = T. This probability is commonly called the Bernoulli measure.
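As a small illustrative computation (a sketch, not part of the formal definition; the function name is arbitrary), the measure of a cylinder set can be evaluated directly from its pattern of heads and tails:

```python
def cylinder_probability(pattern, p):
    """Probability of the cylinder set fixing the outcomes in `pattern`.

    `pattern` is a string over {'H', 'T'} giving the coin flip results
    at times 1..n; all later flips are unconstrained ("don't care").
    """
    k = pattern.count('H')            # number of heads in the pattern
    n = len(pattern)
    return p**k * (1 - p)**(n - k)    # p^k (1 - p)^(n - k)

# e.g. the set of all infinite sequences starting with H, H, T, H:
print(cylinder_probability("HHTH", 0.5))  # 0.0625 = (1/2)^4
```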
Note that the probability of any specific, infinitely long sequence of coin flips is exactly zero; this is because lim_{n→∞} p^n = 0 for any 0 ≤ p < 1; thus any given infinite sequence has measure zero. Nevertheless, one can still say that some classes of infinite sequences of coin flips are far more likely than others; this is given by the asymptotic equipartition property.
To conclude the formal definition, a Bernoulli process is then given by the probability triple (Ω, ℬ, P), as defined above.
Law of large numbers, binomial distribution and central limit theorem
Let us assume the canonical process with H represented by 1 and T represented by 0. The law of large numbers states that the average of the sequence, i.e., (X1 + ⋯ + Xn)/n, will approach the expected value almost surely; that is, the events which do not satisfy this limit have zero probability. The expected value of flipping heads, assumed to be represented by 1, is given by p. In fact, one has E[Xi] = P(Xi = 1) = p for any given random variable Xi out of the infinite sequence of Bernoulli trials that compose the Bernoulli process.
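A short simulation (illustrative only; NumPy and the seed are arbitrary choices) shows the running average settling toward p, as the law of large numbers predicts:

```python
import numpy as np

rng = np.random.default_rng(seed=0)
p = 0.7
flips = rng.binomial(1, p, size=100_000)   # H -> 1, T -> 0

# Running average of the first n flips, for increasing n.
running_mean = np.cumsum(flips) / np.arange(1, len(flips) + 1)
for n in (10, 100, 1_000, 100_000):
    print(n, running_mean[n - 1])   # approaches E[X_i] = p = 0.7
```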
One is often interested in knowing how often one will observe H in a sequence of n coin flips. This is given by simply counting: given n successive coin flips, that is, given the set of all possible strings of length n, the number N of such strings that contain k occurrences of H is given by the binomial coefficient

    N(k, n) = C(n, k) = n! / (k! (n − k)!)
If the probability of flipping heads is given by p, then the total probability of seeing a string of length n with k heads is

    P(k, n) = C(n, k) p^k (1 − p)^(n−k)

where C(n, k) is the binomial coefficient above.
The probability measure thus defined is known as the binomial distribution.
As can be seen from the above formula, if n = 1 the binomial distribution reduces to a Bernoulli distribution; the Bernoulli distribution is thus exactly the special case of the binomial distribution with n = 1.
Of particular interest is the question of the value of P(k, n) for sufficiently long sequences of coin flips, that is, in the limit n → ∞. In this case, one may make use of Stirling's approximation to the factorial, and write

    n! = √(2πn) (n/e)^n (1 + O(1/n))
Inserting this into the expression for P(k, n), one obtains the normal distribution; this is the content of the central limit theorem, and it is the simplest example thereof.
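The following sketch compares the exact binomial probability with its Gaussian approximation near the peak; the formulas used (math.comb for the binomial coefficient, the normal density with mean np and variance np(1 − p)) are standard, while the specific numbers are arbitrary:

```python
import math

def binom_pmf(k, n, p):
    # Exact binomial probability C(n, k) p^k (1-p)^(n-k).
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def normal_approx(k, n, p):
    # Normal density with matching mean and variance.
    mu, var = n * p, n * p * (1 - p)
    return math.exp(-(k - mu)**2 / (2 * var)) / math.sqrt(2 * math.pi * var)

n, p = 1000, 0.5
for k in (480, 500, 520):
    print(k, binom_pmf(k, n, p), normal_approx(k, n, p))  # nearly equal
```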
The combination of the law of large numbers, together with the central limit theorem, leads to an interesting and perhaps surprising result: the asymptotic equipartition property. Put informally, one notes that, yes, over many coin flips, one will observe H a fraction p of the time, and that this corresponds exactly with the peak of the Gaussian. The asymptotic equipartition property essentially states that this peak is infinitely sharp, with infinite fall-off on either side. That is, given the set of all possible infinitely long strings of H and T occurring in the Bernoulli process, this set is partitioned into two: those strings that occur with probability 1, and those that occur with probability 0. This partitioning is known as the Kolmogorov 0-1 law.
The size of this set is interesting, also, and can be explicitly determined: the logarithm of it is exactly the entropy of the Bernoulli process. Once again, consider the set of all strings of length n. The size of this set is 2^n. Of these, only a certain subset are likely; the size of this subset is 2^(nH) for H ≤ 1. By using Stirling's approximation, putting it into the expression for P(k, n), solving for the location and width of the peak, and finally taking n → ∞, one finds that

    H = −p log2 p − (1 − p) log2 (1 − p)
This value is the Bernoulli entropy of a Bernoulli process. Here, H stands for entropy; do not confuse it with the same symbol H standing for heads.
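A small computation of the Bernoulli entropy (in bits, hence base-2 logarithms) and of the size of the typical set; the code is a sketch under that convention:

```python
import math

def bernoulli_entropy(p):
    """H(p) = -p log2 p - (1-p) log2 (1-p), the entropy per flip."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

n, p = 100, 0.3
H = bernoulli_entropy(p)
print(H)              # ~0.881 bits per flip
print(2 ** (n * H))   # ~size of the typical set, vs 2**n strings total
```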
John von Neumann posed a curious question about the Bernoulli process: is it ever possible that a given process is isomorphic to another, in the sense of the isomorphism of dynamical systems? The question long defied analysis, but was finally and completely answered with the Ornstein isomorphism theorem. This breakthrough resulted in the understanding that the Bernoulli process is unique and universal; in a certain sense, it is the single most random process possible; nothing is 'more' random than the Bernoulli process.
Dynamical system
The Bernoulli process can also be understood to be a dynamical system, specifically, a measure-preserving dynamical system. This arises because there is a natural translation symmetry on the product space given by the shift operator

    T(X_k) = X_{k+1}

The measure is translation-invariant; that is, given any cylinder set σ, one has

    P(T^{-1}(σ)) = P(σ)
and thus the Bernoulli measure is a Haar measure.
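The translation invariance can be checked concretely on cylinder sets: the measure of a cylinder set depends only on how many H's and T's it fixes, not on where they sit in time, so shifting the constraints leaves the measure unchanged. A sketch, with the cylinder set represented as a dict from time index to outcome (a representation chosen here purely for illustration):

```python
def cylinder_measure(constraints, p):
    """Bernoulli measure of the cylinder set fixing {time: 'H' or 'T'}."""
    k = sum(1 for v in constraints.values() if v == 'H')
    return p**k * (1 - p)**(len(constraints) - k)

def shift_preimage(constraints):
    """Preimage under the shift T(x)_k = x_{k+1}: constraints move right."""
    return {t + 1: v for t, v in constraints.items()}

sigma = {1: 'H', 2: 'H', 3: 'T'}
p = 0.6
print(cylinder_measure(sigma, p) ==
      cylinder_measure(shift_preimage(sigma), p))  # True
```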
The shift operator should be understood to be an operator acting on the sigma algebra ℬ, so that one has T^{-1}(σ) ∈ ℬ for every cylinder set σ ∈ ℬ.
In this guise, the shift operator is known as the transfer operator or the Ruelle–Frobenius–Perron operator. It is interesting to consider the eigenfunctions of this operator, and how they differ when restricted to different subspaces of functions on Ω. When restricted to the standard topology of the real numbers, the eigenfunctions are, curiously, the Bernoulli polynomials! This coincidence of naming was presumably not known to Bernoulli.
Bernoulli sequence
The term Bernoulli sequence is often used informally to refer to a realization of a Bernoulli process. However, the term has an entirely different formal definition, as given below.
Suppose a Bernoulli process is formally defined as a single random variable, as described above. For every infinite sequence x of coin flips, there is a sequence of integers

    Z^x = {n : X_n(x) = 1}

called the Bernoulli sequence associated with the Bernoulli process. For example, if x represents a sequence of coin flips, then the associated Bernoulli sequence is the list of natural numbers or time-points for which the coin toss outcome is heads.
So defined, a Bernoulli sequence is also a random subset of the index set, the natural numbers.
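Concretely, the associated Bernoulli sequence is just the set of indices where the realization equals 1, as in this short sketch:

```python
x = [1, 0, 0, 1, 1, 0, 1, 1]    # a realization of the process (H -> 1)
bernoulli_sequence = [i for i, b in enumerate(x, start=1) if b == 1]
print(bernoulli_sequence)        # [1, 4, 5, 7, 8]
```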
Almost all Bernoulli sequences are ergodic sequences.
Randomness extraction
From any Bernoulli process one may derive a Bernoulli process with p = 1/2 by the von Neumann extractor, the earliest randomness extractor, which actually extracts uniform randomness.

Basic von Neumann extractor
Represent the observed process as a sequence of zeroes and ones, or bits, and group that input stream in non-overlapping pairs of successive bits, such as (x1, x2), (x3, x4), .... Then for each pair:
- if the bits are equal, discard;
- if the bits are not equal, output the first bit.
For example, an input stream of eight bits 10011011 would be grouped into pairs as (1,0), (0,1), (1,0), (1,1). Then, according to the rules above, these pairs are translated into the output of the procedure: (1), (0), (1), (discard), i.e. 101.
In the output stream 0 and 1 are equally likely, as 10 and 01 are equally likely in the original, both having probability p(1 − p). This extraction of uniform randomness does not require the input trials to be independent, only uncorrelated. More generally, it works for any exchangeable sequence of bits: all sequences that are finite rearrangements of each other are equally likely.
The von Neumann extractor uses two input bits to produce either zero or one output bits, so the output is shorter than the input by a factor of at least 2. On average the computation discards proportion p² + (1 − p)² of the input pairs (the pairs 00 and 11), which is near one when p is near zero or one, and is minimized at 1/2 when p = 1/2 for the original process; in that case the output stream is on average 1/4 the length of the input.
Von Neumann main operation pseudocode:

    if (Bit1 ≠ Bit2) {
        output(Bit1)
    }
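A runnable version of the basic extractor (a sketch; the function name is arbitrary):

```python
def von_neumann_extract(bits):
    """Map a biased i.i.d. bit stream to an unbiased one.

    Pairs of equal bits are discarded; for an unequal pair the first
    bit is emitted, since P(10) = P(01) = p(1-p).
    """
    out = []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a != b:
            out.append(a)
    return out

print(von_neumann_extract([1, 0, 0, 1, 1, 0, 1, 1]))  # [1, 0, 1]
```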
Iterated von Neumann extractor
This decrease in efficiency, or waste of randomness present in the input stream, can be mitigated by iterating the algorithm over the input data. This way the output can be made to be "arbitrarily close to the entropy bound". The iterated version of the von Neumann algorithm, also known as the Advanced Multi-Level Strategy, was introduced by Yuval Peres in 1992. It works recursively, recycling "wasted randomness" from two sources: the sequence of discard/non-discard decisions, and the values of discarded pairs. Intuitively, it relies on the fact that, given the sequence already generated, both of those sources are still exchangeable sequences of bits, and thus eligible for another round of extraction. While such generation of additional sequences can be iterated infinitely to extract all available entropy, an infinite amount of computational resources is required; therefore the number of iterations is typically fixed to a low value, with this value either fixed in advance or calculated at runtime.
More concretely, on an input sequence, the algorithm consumes the input bits in pairs, generating output together with two new sequences:

    input   output   new sequence 1   new sequence 2
    (0,0)   none     0                0
    (0,1)   0        1                none
    (1,0)   1        1                none
    (1,1)   none     0                1
Then the algorithm is applied recursively to each of the two new sequences, until the input is empty.
Example: The input stream from above, 10011011, is processed this way:

    step   input               output    new sequence 1   new sequence 2
    0      (1,0,0,1,1,0,1,1)   (1,0,1)   (1,1,1,0)        (1)
    1      (1,1,1,0)           (1)       (0,1)            (1)
    2      (0,1)               (0)       (1)              ()
    3      (1)                 ()        ()               ()

From step 1 on, the input becomes the new sequence 1 of the previous step, until no further processing is possible (here the single-bit sequence 2 outputs cannot be paired). The output is therefore (1, 0, 1, 1, 0), so that from the eight bits of input five bits of output were generated, as opposed to three bits through the basic algorithm above. The constant output of exactly 2 bits per round (compared with a variable zero to one bits in the basic version) also allows for constant-time implementations which are resistant to timing attacks.
Von Neumann–Peres main operation pseudocode:

    if (Bit1 = Bit2) {
        append(Sequence1, 0)
        append(Sequence2, Bit1)
    } else {
        output(Bit1)
        append(Sequence1, 1)
    }
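A recursive sketch of the Peres scheme follows; the depth cap reflects the fixed number of iterations discussed above, and the function name is illustrative:

```python
def peres_extract(bits, depth):
    """Iterated von Neumann (Peres) extractor, to `depth` levels.

    Each pair contributes one bit to sequence 1 (discard/non-discard)
    and, when discarded, one bit to sequence 2 (the pair's value);
    both sequences are then recycled recursively.
    """
    if depth == 0 or len(bits) < 2:
        return []
    out, seq1, seq2 = [], [], []
    for i in range(0, len(bits) - 1, 2):
        a, b = bits[i], bits[i + 1]
        if a == b:
            seq1.append(0)       # pair discarded
            seq2.append(a)       # recycle the discarded value
        else:
            out.append(a)        # classic von Neumann output
            seq1.append(1)       # pair kept
    return out + peres_extract(seq1, depth - 1) + peres_extract(seq2, depth - 1)

print(peres_extract([1, 0, 0, 1, 1, 0, 1, 1], depth=4))  # [1, 0, 1, 1, 0]
```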
Another tweak was presented in 2016, based on the observation that the Sequence2 channel doesn't provide much throughput, and a hardware implementation with a finite number of levels can benefit from discarding it earlier in exchange for processing more levels of Sequence1.