Examples of Markov chains


This page contains examples of Markov chains and Markov processes in action.
All examples have a countable state space. For an overview of Markov chains in general state space, see Markov chains on a measurable state space.

Discrete-time

Board games played with dice

A game of snakes and ladders, or any other game whose moves are determined entirely by dice, is a Markov chain; indeed, it is an absorbing Markov chain. This is in contrast to card games such as blackjack, where the cards represent a 'memory' of the past moves. To see the difference, consider the probability of a certain event in the game. In the above-mentioned dice games, the only thing that matters is the current state of the board. The next state of the board depends on the current state and the next roll of the dice; it does not depend on how things got to their current state. In a game such as blackjack, a player can gain an advantage by remembering which cards have already been shown, so the next state of the game is not independent of the past states.
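
As a minimal sketch, the following Python snippet simulates one move on a hypothetical ten-square board with a single ladder and a single snake (the board layout, the overshoot rule, and the function name next_square are invented for illustration); the next square depends only on the current square and the die.

    import random

    # Hypothetical 10-square board: a ladder from 3 to 7 and a snake from 9 to 2.
    JUMPS = {3: 7, 9: 2}
    FINAL = 10            # absorbing state: reaching square 10 ends the game

    def next_square(square):
        """One move: the next state depends only on the current square and the die."""
        if square == FINAL:
            return FINAL                  # absorbed: the state never changes again
        roll = random.randint(1, 6)
        target = square + roll
        if target > FINAL:                # overshoot rule (one common convention): stay put
            target = square
        return JUMPS.get(target, target)

    print(next_square(0))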

Random walk Markov chains

A center-biased random walk

Consider a random walk on the number line where, at each step, the position (call it x) may change by +1 (to the right) or −1 (to the left) with probabilities:

P_{\mathrm{move\ left}} = \frac{1}{2} + \frac{1}{2}\left(\frac{x_t}{c + |x_t|}\right)

P_{\mathrm{move\ right}} = 1 - P_{\mathrm{move\ left}}

where c is a constant greater than zero. For example, if the constant c equals 1, the probabilities of a move to the left at positions x = −2, −1, 0, 1, 2 are given by

\frac{1}{6},\ \frac{1}{4},\ \frac{1}{2},\ \frac{3}{4},\ \frac{5}{6}

respectively. The random walk has a centering effect that weakens as c increases.
Since the probabilities depend only on the current position and not on any prior positions, this biased random walk satisfies the definition of a Markov chain.
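
A minimal Python sketch, assuming the transition probabilities given above (the helper name p_left is hypothetical); it reproduces the c = 1 probabilities and simulates a short trajectory in which each step depends only on the current position.

    import random

    def p_left(x, c):
        """Probability of a -1 step at position x (the formula above)."""
        return 0.5 + 0.5 * x / (c + abs(x))

    # With c = 1, the move-left probabilities at x = -2, -1, 0, 1, 2.
    print([p_left(x, 1) for x in (-2, -1, 0, 1, 2)])   # 1/6, 1/4, 1/2, 3/4, 5/6

    # A short trajectory: each step depends only on the current position.
    x, c = 0, 1
    path = [x]
    for _ in range(20):
        x += -1 if random.random() < p_left(x, c) else 1
        path.append(x)
    print(path)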

Gambling

Suppose that you start with $10, and you wager $1 on an unending, fair coin toss indefinitely, or until you lose all of your money. If X_n represents the number of dollars you have after n tosses, with X_0 = 10, then the sequence {X_n : n = 0, 1, 2, ...} is a Markov process. If I know that you have $12 now, then it would be expected that, with even odds, you will have either $11 or $13 after the next toss. This guess is not improved by the added knowledge that you started with $10, then went up to $11, down to $10, up to $11, and then to $12. The fact that the guess is not improved by the knowledge of earlier tosses showcases the Markov property, the memoryless property of a stochastic process.
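
A minimal Python sketch of the coin-toss game (the function name gamble and the cap on the number of tosses are only for illustration); the update uses the current fortune alone, which is the Markov property in code.

    import random

    def gamble(start=10, bet=1, max_tosses=1000):
        """Play the fair coin-toss game until ruin (or until max_tosses, for safety)."""
        x = start
        history = [x]
        for _ in range(max_tosses):
            if x == 0:                                  # all money lost: stop wagering
                break
            x += bet if random.random() < 0.5 else -bet
            history.append(x)
        return history

    # Given that the current fortune is $12, the next value is $11 or $13 with
    # equal probability, no matter how the fortune reached $12.
    print(gamble()[:20])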

A simple weather model

The probabilities of weather conditions, given the weather on the preceding day,
can be represented by a transition matrix:

P = \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix}
The matrix P represents the weather model in which a sunny day is 90%
likely to be followed by another sunny day, and a rainy day is 50% likely to
be followed by another rainy day. The columns can be labelled "sunny" and
"rainy", and the rows can be labelled in the same order.
The entry P_{ij} is the probability that, if a given day is of type i, it will be
followed by a day of type j.
Notice that the rows of P sum to 1: this is because P is a stochastic matrix.
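
As a sketch in Python with NumPy (an assumption about tooling), the matrix can be written down and checked to be stochastic:

    import numpy as np

    # Weather transition matrix: rows = today's weather (sunny, rainy),
    # columns = tomorrow's weather (sunny, rainy).
    P = np.array([[0.9, 0.1],
                  [0.5, 0.5]])

    print(P.sum(axis=1))   # each row sums to 1, i.e. P is a (row-)stochastic matrix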

Predicting the weather

The weather on day 0 is known to be sunny. This is represented by a vector in which the "sunny" entry is 100%, and the "rainy" entry is 0%:

\mathbf{x}^{(0)} = \begin{bmatrix} 1 & 0 \end{bmatrix}

The weather on day 1 can be predicted by:

\mathbf{x}^{(1)} = \mathbf{x}^{(0)} P = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix} = \begin{bmatrix} 0.9 & 0.1 \end{bmatrix}

Thus, there is a 90% chance that day 1 will also be sunny.
The weather on day 2 can be predicted in the same way:

\mathbf{x}^{(2)} = \mathbf{x}^{(1)} P = \begin{bmatrix} 0.9 & 0.1 \end{bmatrix} \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix} = \begin{bmatrix} 0.86 & 0.14 \end{bmatrix}

or

\mathbf{x}^{(2)} = \mathbf{x}^{(0)} P^{2} = \begin{bmatrix} 1 & 0 \end{bmatrix} \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix}^{2} = \begin{bmatrix} 0.86 & 0.14 \end{bmatrix}

General rules for day n are:

\mathbf{x}^{(n)} = \mathbf{x}^{(n-1)} P

\mathbf{x}^{(n)} = \mathbf{x}^{(0)} P^{n}
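
A short NumPy sketch of these predictions, assuming the matrix P above; the choice n = 7 in the last line is arbitrary.

    import numpy as np

    P = np.array([[0.9, 0.1],          # weather transition matrix from above
                  [0.5, 0.5]])
    x0 = np.array([1.0, 0.0])          # day 0: certainly sunny

    x1 = x0 @ P                                   # day 1: [0.9, 0.1]
    x2 = x1 @ P                                   # day 2: [0.86, 0.14]
    x7 = x0 @ np.linalg.matrix_power(P, 7)        # day n via x0 P^n, here n = 7
    print(x1, x2, x7)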

Steady state of the weather

In this example, predictions for the weather on more distant days are increasingly
inaccurate and tend towards a steady state vector. This vector represents
the probabilities of sunny and rainy weather on all days, and is independent
of the initial weather.
The steady state vector is defined as:

\mathbf{q} = \lim_{n \to \infty} \mathbf{x}^{(n)}

but converges to a strictly positive vector only if P is a regular transition matrix.
Since q is independent of the initial conditions, it must be unchanged when transformed by P. This makes it a left eigenvector of P with eigenvalue 1, and means it can be derived from P. For the weather example:

\mathbf{q} P = \mathbf{q}, \qquad \begin{bmatrix} q_1 & q_2 \end{bmatrix} \begin{bmatrix} 0.9 & 0.1 \\ 0.5 & 0.5 \end{bmatrix} = \begin{bmatrix} q_1 & q_2 \end{bmatrix}

so

0.9\,q_1 + 0.5\,q_2 = q_1, \qquad 0.1\,q_1 + 0.5\,q_2 = q_2

and since q is a probability vector we know that

q_1 + q_2 = 1.

Solving these simultaneous equations gives the steady state distribution:

\begin{bmatrix} q_1 & q_2 \end{bmatrix} = \begin{bmatrix} 5/6 & 1/6 \end{bmatrix} \approx \begin{bmatrix} 0.833 & 0.167 \end{bmatrix}
In conclusion, in the long term, about 83.3% of days are sunny.
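
A NumPy sketch of two ways to obtain the steady state, assuming the matrix P above: power iteration (many applications of P) and a direct solve of q P = q together with q_1 + q_2 = 1.

    import numpy as np

    P = np.array([[0.9, 0.1],          # weather transition matrix from above
                  [0.5, 0.5]])

    # Power iteration: x^(n) = x^(0) P^n approaches q as n grows.
    q_power = np.array([1.0, 0.0]) @ np.linalg.matrix_power(P, 50)

    # Direct solution of q P = q together with q_1 + q_2 = 1.
    A = np.vstack([(P - np.eye(2)).T, np.ones(2)])
    b = np.array([0.0, 0.0, 1.0])
    q_exact = np.linalg.lstsq(A, b, rcond=None)[0]

    print(q_power, q_exact)            # both approximately [0.833, 0.167]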

Stock market

A simple example can be pictured with a state diagram: a directed graph whose vertices are the states and whose edges are the possible transitions between them. The states represent whether a hypothetical stock market is exhibiting a bull market, bear market, or stagnant market trend during a given week. In this example, a bull week is followed by another bull week 90% of the time, a bear week 7.5% of the time, and a stagnant week the other 2.5% of the time. Labeling the state space {1 = bull, 2 = bear, 3 = stagnant}, the transition matrix for this example is

P = \begin{bmatrix} 0.90 & 0.075 & 0.025 \\ 0.15 & 0.80 & 0.05 \\ 0.25 & 0.25 & 0.50 \end{bmatrix}

The distribution over states can be written as a stochastic row vector x with the relation x^{(n+1)} = x^{(n)} P. So if at time n the system is in state x^{(n)}, then three time periods later, at time n + 3, the distribution is

x^{(n+3)} = x^{(n)} P^{3}

In particular, if at time n the system is in state 2 (bear), then at time n + 3 the distribution is

x^{(n+3)} = \begin{bmatrix} 0 & 1 & 0 \end{bmatrix} P^{3} = \begin{bmatrix} 0.3575 & 0.56825 & 0.07425 \end{bmatrix}

Using the transition matrix it is possible to calculate, for example, the long-term fraction of weeks during which the market is stagnant, or the average number of weeks it will take to go from a stagnant to a bull market. Using the transition probabilities, the steady-state probabilities indicate that 62.5% of weeks will be in a bull market, 31.25% of weeks will be in a bear market and 6.25% of weeks will be stagnant, since:

\pi = \begin{bmatrix} 0.625 & 0.3125 & 0.0625 \end{bmatrix}, \qquad \pi P = \pi, \qquad \pi_1 + \pi_2 + \pi_3 = 1
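
A NumPy sketch of both computations, assuming the transition matrix above; the long-run fractions are read off from the left eigenvector of P with eigenvalue 1.

    import numpy as np

    # Transition matrix with states ordered (bull, bear, stagnant); the bear and
    # stagnant rows follow the standard version of this example.
    P = np.array([[0.90, 0.075, 0.025],
                  [0.15, 0.80,  0.05 ],
                  [0.25, 0.25,  0.50 ]])

    x = np.array([0.0, 1.0, 0.0])                  # currently in state 2 (bear market)
    print(x @ np.linalg.matrix_power(P, 3))        # distribution three weeks later

    # Long-run fractions: left eigenvector of P for eigenvalue 1, normalised to sum to 1.
    vals, vecs = np.linalg.eig(P.T)
    pi = np.real(vecs[:, np.argmax(np.real(vals))])
    pi /= pi.sum()
    print(pi)                                      # approximately [0.625, 0.3125, 0.0625]
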
A thorough development and many examples can be found in the on-line monograph Meyn & Tweedie 2005.
A finite-state machine can be used as a representation of a Markov chain. Assuming a sequence of independent and identically distributed input signals, if the machine is in state y at time n, then the probability that it moves to state x at time n + 1 depends only on the current state.

Continuous-time

A birth-death process

If one pops one hundred kernels of popcorn in an oven, each kernel popping at an independent exponentially distributed time, then this would be a continuous-time Markov process. If X_t denotes the number of kernels which have popped up to time t, the problem can be defined as predicting the number of kernels that will have popped by some later time. The only thing one needs to know is the number of kernels that have popped prior to the time t. It is not necessary to know when they popped, so knowing X_s for previous times s < t is not relevant.
The process described here is an approximation of a Poisson point process; Poisson processes are also Markov processes.
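
A minimal Python sketch, assuming a popping rate of 1 per unit time for every kernel (the rate and the helper name popped_by are illustrative only); it draws the 100 independent exponential popping times and evaluates the counting process X_t at a few times.

    import random

    random.seed(0)
    # Independent exponential popping times, one per kernel (rate 1 per unit time
    # is an arbitrary choice for illustration).
    pop_times = sorted(random.expovariate(1.0) for _ in range(100))

    def popped_by(t):
        """X_t: the number of kernels that have popped by time t."""
        return sum(1 for s in pop_times if s <= t)

    # Only the current count matters for the future evolution; exactly when the
    # earlier kernels popped is irrelevant.
    print(popped_by(0.5), popped_by(1.0), popped_by(2.0))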