DM 1: Markov Chains: Markov Chain Monte Carlo and the Markov matrix
Don't hesitate to ask questions and make remarks on this wiki page.

This exercise will allow you to familiarize yourselves with Markov chains.
  • In Part 1 we go back to the heliport and discover the "1/2 thumb rule", an empirical yet very important rule for successful Markov Chain Monte Carlo algorithms.
  • In Part 2 we study the scaling of the correlation time of a Markov chain. We consider systems of particles hopping on a lattice and construct the associated Markov matrix.

Basic concepts: Detailed and Global Balance

We would like to sample a configuration C with a given probability π_C. A solution to this problem is the Markov Chain Monte Carlo method. We define a dynamics over the ensemble of configurations {C} such that, in the long-time limit, each configuration is visited with the correct probability π_C. As discussed in the lecture, a Markov chain describes an evolution that is fully specified by the transition probabilities

    p(C → C')

between configurations C and C'. The probability π_C(t) of being at time t in configuration C evolves according to

    π_C(t+1) = Σ_{C'} π_{C'}(t) p(C' → C).

To ensure that π_C(t) converges towards the probability distribution π_C, one must satisfy the global balance condition

    Σ_{C'} π_{C'} p(C' → C) = π_C    for every configuration C.

The detailed balance condition is more restrictive,

    π_C p(C → C') = π_{C'} p(C' → C)    for every pair of configurations C and C',

but, if it is verified by the transition probabilities, it also leads to the same probability distribution in the long-time limit.
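
To make these two conditions concrete, here is a minimal numerical check on a toy two-state chain; the chain and the numbers below are an assumed illustration, not part of the exercise.
import numpy
# Assumed toy example: two configurations with stationary probabilities
# pi = (2/3, 1/3) and a column-stochastic matrix with p[Cp, C] = p(C -> Cp).
pi = numpy.array([2.0 / 3.0, 1.0 / 3.0])
p = numpy.array([[0.75, 0.5],
                 [0.25, 0.5]])
# Detailed balance: pi_C p(C -> Cp) must equal pi_Cp p(Cp -> C) for every pair.
for C in range(2):
    for Cp in range(2):
        print(C, Cp, pi[C] * p[Cp, C], pi[Cp] * p[C, Cp])
# Global balance (stationarity) then follows: applying p leaves pi unchanged.
print(numpy.dot(p, pi))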

Part 1 - The heliport and the 1/2 thumb rule [GROUP 1]

Consider the following Markov chain Monte Carlo algorithm:
import random

def markov_pi(N, delta):
    x, y = 1.0, 1.0                  # start in the upper-right corner of the square
    n_hits = 0
    for i in range(N):
        del_x, del_y = random.uniform(-delta, delta), random.uniform(-delta, delta)
        if abs(x + del_x) < 1.0 and abs(y + del_y) < 1.0:
            x, y = x + del_x, y + del_y   # accept the move only if it stays inside the square
        if x ** 2 + y ** 2 < 1.0:
            n_hits += 1              # count configurations inside the unit circle
    return n_hits

n_runs = 1000
n_trials = 4000
delta = 0.1
for run in range(n_runs):
    print(4.0 * markov_pi(n_trials, delta) / float(n_trials))

  1. Run the program and convince yourself that for large enough n_trials the output of each run converges to the value of π. In the following, set n_runs=500 and n_trials=1000. Modify the program so that it computes the acceptance ratio, i.e. the number of moves that stay inside the square divided by n_trials (a minimal sketch of one possible modification is given after this list). Run the new program and compute the acceptance ratio for delta=0.1, delta=0.2, delta=0.3, ..., delta=5.0. Plot the computed acceptance ratios as a function of delta using linear scaling on both axes.
  2. The 1/2 thumb rule predicts that the best performance of a Markov chain Monte Carlo algorithm is expected for an acceptance ratio of approximately 1/2. Find the delta interval in which the acceptance ratio equals 1/2. Example: if you expect the acceptance ratio of 1/2 to lie between delta=3.5 and delta=3.6, write "3.5-3.6". Intervals longer than 0.1 are considered wrong results.
  3. Study the performance of the algorithm as a function of delta. Modify the program so that it computes the standard deviation of the n_runs estimates of π for delta=0.1, 0.2, ..., 5.0. Plot the standard deviation as a function of delta using linear scaling. Comment on your results in one or two sentences.
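
Here is a minimal sketch of one possible modification for question 1; the function name markov_pi_acceptance and the bookkeeping below are our own choices, not the required solution.
import random

def markov_pi_acceptance(N, delta):
    x, y = 1.0, 1.0
    n_hits, n_accepted = 0, 0
    for i in range(N):
        del_x, del_y = random.uniform(-delta, delta), random.uniform(-delta, delta)
        if abs(x + del_x) < 1.0 and abs(y + del_y) < 1.0:
            x, y = x + del_x, y + del_y
            n_accepted += 1                    # the move stays inside the square
        if x ** 2 + y ** 2 < 1.0:
            n_hits += 1
    return n_hits, n_accepted

n_runs, n_trials, delta = 500, 1000, 0.1
estimates, acceptance = [], []
for run in range(n_runs):
    n_hits, n_accepted = markov_pi_acceptance(n_trials, delta)
    estimates.append(4.0 * n_hits / float(n_trials))
    acceptance.append(n_accepted / float(n_trials))
print(sum(acceptance) / n_runs)                # mean acceptance ratio at this delta
The standard deviation asked for in question 3 can then be obtained from the list estimates (for instance with numpy.std) inside a loop over the values of delta.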

N.B. We are asking you to produce a scientific diagram. The general criteria for the quality of diagrams in this lecture are the following. Each diagram can earn a total of 2 points:
  • 1 point is given if the diagram meets the following criteria of graphical quality: (i) the diagram shows exactly the quantities that were asked for; (ii) all axes are correctly labelled; (iii) all axes have the correct scaling (linear, logarithmic, semi-logarithmic, ...); (iv) the diagram shows at least the range of data points that was asked for.
  • 1 point is given if the diagram represents the correct results.
If the plot meets the criteria of graphical quality and shows the correct results, you will earn a total of 2 points.



Part 2 - Hopping on a lattice: the correlation time

As an example we consider a particle on a system of L sites with a hard wall at sites 0 and L-1. At equilibrium the particle occupies site i with probability π_i = 1/L. To sample a configuration according to this probability, one possibility is to implement a dynamics in which the particle jumps from one site to one of its neighboring sites with a jump probability verifying detailed balance with respect to π_i (a minimal simulation sketch is given below the figure).
[Figure CFP_2010_diffusion_model.png: a particle hopping on a one-dimensional lattice with L sites.]
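
Before turning to the Markov matrix, here is a minimal sketch of the corresponding Monte Carlo simulation, assuming a jump probability of 1/2 towards each neighbor and rejection of jumps through the walls; the variable names are our own.
import random
L = 10
n_steps = 100000
site = 0
visits = [0] * L                          # histogram of visited sites
for step in range(n_steps):
    move = random.choice([-1, 1])         # propose a jump to the left or to the right
    if 0 <= site + move <= L - 1:         # reject jumps through the hard walls
        site += move
    visits[site] += 1
print([v / float(n_steps) for v in visits])   # should approach pi_i = 1/L for every site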

The ensemble of all transition probabilities can be represented in the Markov matrix of the system. The function matrix_calc below computes this matrix. We introduce a new package, numpy, which is very useful for handling matrices, linear-algebra operations, etc.
import math, numpy

def matrix_calc(L):
    # Right and left neighbors of each site; min/max implement the hard walls at 0 and L-1.
    neighbor = [[min(k + 1, L - 1), max(k - 1, 0)] for k in range(L)]
    hopping1d = numpy.zeros((L, L))
    for k in range(L):
        # Column k holds the probabilities of jumping out of site k; at the walls
        # the rejected move puts weight on the diagonal (the particle stays put).
        hopping1d[neighbor[k][0], k] = 0.5
        hopping1d[neighbor[k][1], k] = 0.5
    return hopping1d
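As a quick sanity check, meant to be appended to the same file so that numpy and matrix_calc are already available, one can verify that the matrix is column-stochastic and that the uniform distribution is stationary:
L = 4
matrix = matrix_calc(L)
print(matrix.sum(axis=0))            # each column must sum to 1 (probability conservation)
pi = numpy.ones(L) / L               # uniform distribution pi_i = 1/L
print(numpy.dot(matrix, pi))         # should reproduce the uniform distribution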
As you discussed in Class Session 1, the Markov matrix controls the time evolution of all possible simulations performed with Markov chain algorithms. The eigenvalues and eigenvectors of the Markov matrix can be computed with numpy.
(Remark: you have to copy and paste the function matrix_calc into the program below in order to obtain a single, self-contained file.)
import pylab, math, numpy
L = 4
matrix = matrix_calc(L)
eigenvalues, eigenvectors = numpy.linalg.eig(matrix)
print(eigenvalues)
# print(eigenvectors[:, 0])
 
  1. We call λ2 the second largest eigenvalue in absolute value. Explain why the time scale Δ = -1 / (log |λ2|) gives the relaxation time towards equilibrium.
  2. Plot the behavior of Δ as a function of L (L=4,8,16,...,256) in logarithmic scale. How fast does the correlation time grow with L? [Tip: it can be useful to use eigenvalues.sort() in order to identify the second largest eigenvalue; a minimal sketch of this computation is also given after the two-dimensional code below.]
  3. Modify the function matrix_calc in order to implement periodic boundary conditions. If you repeat the computation, you will find that equilibrium is not reached. Explain why and propose a possible solution to this problem.
  4. Below we modify the function matrix_calc. Which hopping problem are we implementing?
  5. Compute the size of the Markov matrix as a function of L and the dimension of the lattice.
  6. Plot the behavior of Δ as a function of L (L=4,8,16,...,64) in logarithmic scale. [Maybe L=64 gives too large a matrix for your computer; if this is the case, plot L=4,8,16,32.] How fast does the correlation time grow with L?
import math, numpy

def matrix_calc2(L):
    Nsites = L * L
    hopping2d = numpy.zeros((Nsites, Nsites))
    for k in range(Nsites):
        # Coordinates of site k on the L x L lattice.
        kx = k % L
        ky = k // L                       # integer division (written k / L in Python 2)
        # The four possible jumps, each attempted with probability 1/4; min/max
        # implement the hard walls, so rejected moves add weight to the diagonal.
        hopping2d[ky * L + min(kx + 1, L - 1), k] += 0.25
        hopping2d[ky * L + max(kx - 1, 0), k] += 0.25
        hopping2d[min(ky + 1, L - 1) * L + kx, k] += 0.25
        hopping2d[max(ky - 1, 0) * L + kx, k] += 0.25
    return hopping2d
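
For questions 2 and 6, here is a minimal sketch of one possible way to extract Δ from the spectrum; the helper delta_time is our own name, and matrix_calc and matrix_calc2 defined above are assumed to be in the same file.
def delta_time(matrix):
    eigenvalues, eigenvectors = numpy.linalg.eig(matrix)
    moduli = sorted(abs(eigenvalues))          # eigenvalue moduli, in increasing order
    return -1.0 / math.log(moduli[-2])         # Delta = -1 / log|lambda_2|

for L in [4, 8, 16, 32, 64, 128, 256]:
    print(L, delta_time(matrix_calc(L)))       # use matrix_calc2(L) and L=4,...,64 for the 2D lattice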

More difficult: many particles on a lattice [This part will not be evaluated]

Consider a one-dimensional system of 4 sites and 2 particles. Each site is either empty or occupied by a single particle. Each particle can jump to its right or to its left, provided the target site is empty (this rule defines the Simple Symmetric Exclusion Process, or SSEP). A hard wall is placed at site 0 and at site 3 (boundary conditions are not periodic): when a particle is at site 0, it can only stay at site 0 or jump to site 1 if that site is empty; a similar rule applies at site 3. Below we implement a Markov chain Monte Carlo algorithm corresponding to this process.
import random
random.seed(2)
L = 4
n_trials = 100000
state = [0 for k in range(L)]            # occupation number of each site
state[0], state[1] = 1, 1                # start with the two particles on sites 0 and 1
for iter in range(n_trials):
    k = random.randint(0, L - 2)         # pick one of the L-1 bonds (k, k+1) at random
    # Swapping the two sites implements the SSEP move: nothing happens unless
    # exactly one of the two sites is occupied, in which case the particle hops.
    state[k], state[k + 1] = state[k + 1], state[k]
  1. Determine whether detailed balance is respected in the presence of two particles.
  2. Explain how to generalize this algorithm to the case of a system of L sites and n particles following the SSEP rules. How does the complexity of the algorithm (i.e. the number of operations) grow as a function of L and n?
  3. Write explicitly the Markov matrix associated with the SSEP for L=4 and n=2 (be careful to impose the correct normalization; a sketch of one possible construction is given after this list).
  4. Compute the size of the associated Markov matrix as a function of L and n. Comment on the possibilities, in practice, of using the Markov matrix and the Markov Chain Monte Carlo method in complex systems.
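
For question 3, here is a minimal sketch of one possible construction; the enumeration of configurations and the names configs, index and ssep are our own choices. Each of the L-1 bonds is picked with probability 1/(L-1) and its two sites are swapped, exactly as in the Monte Carlo program above.
import itertools, numpy
L, n = 4, 2
# All configurations of n particles on L sites (tuples of occupation numbers).
configs = [c for c in itertools.product([0, 1], repeat=L) if sum(c) == n]
index = {c: i for i, c in enumerate(configs)}
ssep = numpy.zeros((len(configs), len(configs)))
for c in configs:
    for k in range(L - 1):                     # pick bond (k, k+1) with probability 1/(L-1)
        swapped = list(c)
        swapped[k], swapped[k + 1] = swapped[k + 1], swapped[k]
        ssep[index[tuple(swapped)], index[c]] += 1.0 / (L - 1)
print(configs)
print(ssep)
print(ssep.sum(axis=0))                        # normalization check: each column sums to 1
Swapping two sites with equal occupation leaves the configuration unchanged, which puts the corresponding weight on the diagonal and guarantees that every column of the matrix sums to one.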
