Two State Markov Chains

Markov chains use matrices to calculate future probabilities and can model complex systems. In Maths Methods we only ever study two-state Markov chains.

Markov Chains

Markov chains describe a system observed at discrete moments in time, n, with the state at time n denoted Xn. If the probability of each future state depends only on the present state, and not on any earlier history, then the process is called a Markov chain.

For a two-state Markov chain it is assumed that:

  • Each step results in one of two outcomes; for example, it either rains or it does not rain
  • The probability of each outcome depends only on the previous step; for example, how your footy team plays this week determines the probabilities for how they play next week
  • The conditional probabilities are the same at every point in time; that is, the probability of one outcome occurring given the previous outcome is constant

Transition Matrix

As an example, let's pretend that you can either be passing or failing. We will call failing state 0 and passing state 1.


Let the probability that, if you are now passing, by the end of the week you will be failing be 0.05.


The remaining probabilities follow from this: the probability that you are passing now and still passing next week is the complement, 0.95. We also need the probability that you are failing now and still failing next week (here, 0.2), whose complement is the probability of going from failing to passing (0.8).



These can all be combined to form our transition matrix. The transition matrix is an array of conditional probabilities whose size corresponds to the number of states; for our two states it is a 2 × 2 matrix.

T=\begin{bmatrix}Pr(X_{n+1}=0|X_{n}=0) &Pr(X_{n+1}=0|X_{n}=1) \\ Pr(X_{n+1}=1|X_{n}=0) & Pr(X_{n+1}=1|X_{n}=1)\end{bmatrix}=\begin{bmatrix}0.2 &0.05 \\ 0.8 & 0.95\end{bmatrix}
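A useful sanity check on any transition matrix is that each column sums to 1, because from any current state the next step must land in one of the two states. A minimal sketch using NumPy, with the passing/failing values from above:

```python
import numpy as np

# Transition matrix from the passing/failing example:
# column 0 = currently failing (state 0), column 1 = currently passing (state 1)
T = np.array([[0.20, 0.05],   # Pr(failing next week | current state)
              [0.80, 0.95]])  # Pr(passing next week | current state)

# Each column must sum to 1: next week you either fail or pass
print(T.sum(axis=0))  # → [1. 1.]
```

If a column does not sum to 1, at least one conditional probability has been placed in the wrong entry.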

State vector

The other matrix that is important when studying Markov chains is the initial state vector, S0. This contains information about the Markov chain at the initial point in time. The initial state vectors we deal with are 2 × 1 matrices.

Now that we have the initial state and the conditional probabilities, we can multiply the two together to get future probabilities.

S_{1}=T \times S_{0}

S_{2}=T \times S_{1}=T^{2} \times S_{0}

S_{3}=T \times S_{2}=T^{3} \times S_{0}
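The recurrence above can be sketched in NumPy. The initial state here is an assumption chosen for illustration: we suppose you start out failing with certainty, so S0 = (1, 0).

```python
import numpy as np

T = np.array([[0.20, 0.05],
              [0.80, 0.95]])

# Assumed initial state: currently failing (state 0) with certainty
S0 = np.array([[1.0],
               [0.0]])

S1 = T @ S0                              # one step: S1 = T × S0
S2 = np.linalg.matrix_power(T, 2) @ S0   # two steps at once: S2 = T² × S0

print(S1.ravel())  # → [0.2 0.8]
print(S2.ravel())  # → [0.08 0.92]
```

Note that applying T to S1 gives the same result as applying T² to S0, which is exactly the pattern in the equations above.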

Practice Exam Question

Part 2 of Question 3b in Exam 2 of 2013


The most straightforward way to answer this question is to use Markov chains. A two-state Markov chain applies here because each question has only two options, and each answer is conditional on the previous one.

Let’s set our state zero to be a correct answer and state one to be incorrect.

S_{0}=\begin{bmatrix}Pr(X=0)\\ Pr(X=1)\end{bmatrix}=\begin{bmatrix}0\\ 1\end{bmatrix}

Then we need to define our transition matrix:

T=\begin{bmatrix}Pr(X_{n+1}=0|X_{n}=0) &Pr(X_{n+1}=0|X_{n}=1) \\ Pr(X_{n+1}=1|X_{n}=0) & Pr(X_{n+1}=1|X_{n}=1)\end{bmatrix}=\begin{bmatrix}\frac{3}{4} &\frac{1}{3} \\ \frac{1}{4} & \frac{2}{3}\end{bmatrix}

Now, in order to find the probabilities for her 25th answer, we need to multiply the state vector by the transition matrix 24 times.

However, this is very time-consuming by hand, so you will have to rely on the matrix operations on your calculator to get the answer.

S_{24}=T^{24} \times S_{0}=\begin{bmatrix}\frac{3}{4} &\frac{1}{3} \\ \frac{1}{4} & \frac{2}{3}\end{bmatrix}^{24} \times \begin{bmatrix}0\\ 1\end{bmatrix}=\begin{bmatrix}0.5714\\ 0.4286\end{bmatrix}
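If a CAS calculator is not to hand, the same computation can be checked with NumPy's matrix-power function, which plays the role of the calculator's matrix operations here:

```python
import numpy as np

T = np.array([[3/4, 1/3],
              [1/4, 2/3]])

# Initial state: the first answer is incorrect (state 1)
S0 = np.array([[0.0],
               [1.0]])

# 24 transitions take us from question 1 to question 25
S24 = np.linalg.matrix_power(T, 24) @ S0
print(np.round(S24.ravel(), 4))  # → [0.5714 0.4286]
```

After 24 steps the chain has essentially reached its long-run behaviour, which is why the entries are so close to 4/7 and 3/7.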

As we want state zero, a correct answer, our final answer is 0.5714.

It is really important to define both state zero and state one so that you and the marker know what you are doing. Also, if the question doesn't ask for the matrix multiplications to be done by hand, use your calculator to help you out.