# Direct sampling (children's game)

 Children playing at the Monte Carlo beach

The game takes place on the beach of Monte Carlo. The pebbles are samples of the uniform probability distribution in the square. They are obtained directly. It is for this reason that the algorithm is called "direct-sampling" Monte Carlo.

## Direct sampling (children's game) in Python

```python
from random import uniform

def direct_pi(N):
    n_hits = 0
    for i in range(N):
        x, y = uniform(-1.0, 1.0), uniform(-1.0, 1.0)
        if x ** 2 + y ** 2 < 1.0:
            n_hits += 1
    return n_hits

n_trials = 10000
for attempt in range(10):
    print(attempt, 4 * direct_pi(n_trials) / float(n_trials))
```

## Direct sampling (children's game), output

Here is the output of a modified version of the program, in which each pebble is colored according to whether it fell inside the circle or not.
 Output of the direct-sampling program, slightly modified by coloring the "hits" in red and the "non-hits" in blue.

## Comments

1/ Monte Carlo is an integration algorithm. The above direct-sampling algorithm treats a probability distribution (the uniform distribution of pebbles within the square) and an observable, the "hitting variable" (one within the unit circle, zero outside):

$\frac{n_\text{hits} } {N} \sim \frac{ \int_{-1}^{1} dx \int_{-1}^{1} dy \pi(x,y) O(x,y) } { \int_{-1}^{1} dx \int_{-1}^{1} dy \pi(x,y) }$

2/ Direct-sampling algorithms exist only for a handful of physically interesting models. They are very useful when they apply.
3/ The existence of a uniform (pseudo-)random number generator is assumed. The construction of good random number generators is a mature branch of mathematics. For details, see W. Krauth's book.

# Markov-chain sampling (adults' game)

 Adults playing on the Monte Carlo heliport

## Markov-chain sampling (adults' game) in Python

```python
from random import uniform

def markov_pi(delta, N):
    x, y = 1.0, 1.0
    N_hits = 0
    for i in range(N):
        del_x, del_y = uniform(-delta, delta), uniform(-delta, delta)
        if abs(x + del_x) < 1.0 and abs(y + del_y) < 1.0:
            x, y = x + del_x, y + del_y
        if x ** 2 + y ** 2 < 1.0:
            N_hits += 1
    return N_hits

n_trials = 10000
for k in range(10):
    print(4 * markov_pi(0.3, n_trials) / float(n_trials))
```


## Comments

1. This is a Markov-chain sampling algorithm for the pebble game.
2. Note that we start from a non-thermalized position (x, y) = (1, 1), the "club house".
3. The above algorithm is correct for all step sizes delta, but not all step sizes are equally good.
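Comment 3 can be made quantitative by monitoring the acceptance rate of the chain: very small steps are almost always accepted but explore the square slowly, while very large steps are mostly rejected. A common rule of thumb aims at an acceptance rate of roughly one half. A minimal sketch (the helper `markov_pi_acceptance` is ours, not part of the lecture code):

```python
import random

def markov_pi_acceptance(delta, N):
    """Fraction of accepted pebble throws for step size delta."""
    x, y = 1.0, 1.0
    n_accepted = 0
    for i in range(N):
        del_x = random.uniform(-delta, delta)
        del_y = random.uniform(-delta, delta)
        if abs(x + del_x) < 1.0 and abs(y + del_y) < 1.0:
            x, y = x + del_x, y + del_y
            n_accepted += 1
    return n_accepted / float(N)

for delta in [0.1, 0.3, 1.0, 3.0]:
    print(delta, markov_pi_acceptance(delta, 100000))
```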

## Homogeneous 3x3 Pebble game (detailed balance)

We consider the 3x3 pebble game: one particle moving on a 3x3 chessboard without periodic boundary conditions. We design a local algorithm (a discrete version of the adults' game) such that each site is visited with the same probability 1/9:
$\pi(1) = \dots = \pi(9) = \frac{1}{9}$
The discretized version of the adults' random pebble throw consists of moving from a site to each of its neighbors with probability 1/4.
Suppose that at some time we are on site a. We can only move to b or c, or simply remain at a. This gives

$p_{a \to a} + p_{a \to b} + p_{a \to c} = 1$

At the same time, to get to a, we must come either from a, from b, or from c.
$\pi(a)p(a \to a) + \pi(b) p(b\to a) + \pi(c) p(c \to a) = \pi(a)$
This yields the **global balance condition**:
$\pi(b) p(b\to a) + \pi(c) p(c \to a) = \pi(a) p(a\to b) + \pi(a) p(a \to c)$
A more restrictive condition is the **detailed balance condition**:
$\pi(b) p(b\to a) = \pi(a) p(a\to b), \text{etc.}$
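Detailed balance can be checked numerically for the homogeneous game. The sketch below (an illustration, not part of the original lecture code) rebuilds the hopping probabilities from the neighbor table used in the simulation program of this section and verifies the condition for all pairs of sites:

```python
# Neighbor table of the homogeneous 3x3 pebble game; self-entries
# implement rejection at the walls.
neighbor = {1: [2, 4, 1, 1], 2: [3, 5, 1, 2], 3: [3, 6, 2, 3],
            4: [5, 7, 4, 1], 5: [6, 8, 4, 2], 6: [6, 9, 5, 3],
            7: [8, 7, 7, 4], 8: [9, 8, 7, 5], 9: [9, 9, 8, 6]}
pi = {site: 1.0 / 9.0 for site in neighbor}   # uniform stationary distribution

def p(a, b):
    """Probability of moving from site a to site b (1/4 per table entry)."""
    return neighbor[a].count(b) / 4.0

for a in neighbor:
    # Each row of hopping probabilities sums to one (stochasticity).
    assert abs(sum(p(a, b) for b in neighbor) - 1.0) < 1e-12
    for b in neighbor:
        # Detailed balance: pi(a) p(a -> b) = pi(b) p(b -> a).
        assert abs(pi[a] * p(a, b) - pi[b] * p(b, a)) < 1e-12
print('detailed balance holds on all pairs of sites')
```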

Below is a Python implementation of the 3x3 pebble game. With positions 1, 2, ..., 9, the four neighbors of site 1 are (2, 4, 1, 1). This ensures that the pebble moves with probability 1/4 to each of sites 2 and 4, and remains on site 1 with probability 1/2. We start the simulation from site 9.

```python
import random, pylab

# Neighbor table: the four neighbors of each site; self-entries
# implement rejection at the walls.
neighbor = {1: [2, 4, 1, 1], 2: [3, 5, 1, 2], 3: [3, 6, 2, 3],
            4: [5, 7, 4, 1], 5: [6, 8, 4, 2], 6: [6, 9, 5, 3],
            7: [8, 7, 7, 4], 8: [9, 8, 7, 5], 9: [9, 9, 8, 6]}
all_pos = []
N_iter = 100
for iter1 in range(10000):
    pos = 9
    for iter in range(N_iter):
        pos = neighbor[pos][random.randint(0, 3)]
    all_pos.append(pos)
pylab.figure(1)
pylab.hist(all_pos, bins=9, range=(0.5, 9.5), density=True)
pylab.title('3x3 pebble game, starting at 9, after ' + str(N_iter) + ' steps')
pylab.savefig('histo_3x3_' + str(N_iter) + '_steps.png')
pylab.show()
```

Here is the output of the above Python program for 5, 10, and 100 steps.

## Inhomogeneous 3x3 pebble game (Metropolis algorithm)

For a general probability distribution
$\pi(1),\pi(2), \dots, \pi(9),$
we can use the celebrated Metropolis algorithm
$p(a \to b) = \min(1, \pi(b)/\pi(a)),$
which we illustrate in a Python program for the inhomogeneous 3x3 pebble game.
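The program is not reproduced here; the following sketch shows one way it might look, with an illustrative unnormalized choice π(site) = site (the weights are ours, chosen only for demonstration):

```python
import random

# Neighbor table of the 3x3 pebble game (self-entries implement the walls).
neighbor = {1: [2, 4, 1, 1], 2: [3, 5, 1, 2], 3: [3, 6, 2, 3],
            4: [5, 7, 4, 1], 5: [6, 8, 4, 2], 6: [6, 9, 5, 3],
            7: [8, 7, 7, 4], 8: [9, 8, 7, 5], 9: [9, 9, 8, 6]}
# Illustrative unnormalized weights pi(1), ..., pi(9): pi(site) = site.
pi = {site: float(site) for site in neighbor}

pos = 9
histogram = {site: 0 for site in neighbor}
for step in range(100000):
    new_pos = neighbor[pos][random.randint(0, 3)]
    # Metropolis acceptance: move with probability min(1, pi(new)/pi(old)).
    if random.uniform(0.0, 1.0) < pi[new_pos] / pi[pos]:
        pos = new_pos
    histogram[pos] += 1

# Compare the sampled frequencies with pi(site)/45 (45 = 1 + 2 + ... + 9).
total = float(sum(histogram.values()))
for site in neighbor:
    print(site, histogram[site] / total, pi[site] / 45.0)
```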

 Stationary distribution (red dots) and Monte Carlo histogram (blue) for the inhomogeneous 3x3 pebble game

## Transfer matrix

We return to the case of the homogeneous pebble game.
The hopping probabilities p(a → b) from site a to site b form a 9x9 transfer matrix.
$\{p(a\rightarrow b)\} = \frac{1}{4} \begin{bmatrix} 2 & 1 & 0 & 1 & 0 & 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 1 & 2 & 0 & 0 & 1 & 0 & 0 & 0\\ 1 & 0 & 0 & 1 & 1 & 0 & 1 & 0 & 0\\ 0 & 1 & 0 & 1 & 0 & 1 & 0 & 1 & 0\\ 0 & 0 & 1 & 0 & 1 & 1 & 0 & 0 & 1\\ 0 & 0 & 0 & 1 & 0 & 0 & 2 & 1 & 0\\ 0 & 0 & 0 & 0 & 1 & 0 & 1 & 1 & 1\\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 1 & 2 \end{bmatrix}$

The largest eigenvalue of this matrix is
$\lambda_0 = 1,$
and the second-largest eigenvalue is
$\lambda_1 = 0.75$

 Approach to equilibrium for the 3x3 pebble game compared to the decay with the second-largest eigenvalue

### Exponential convergence

Even though this is not the full story[1], the convergence of a Markov chain is governed by the second-largest eigenvalue of its transfer matrix. To show this, we plot how the probability of being at site 1 (lower left corner) at iteration i, having started from site 9 (upper right corner), approaches the stationary value 1/9.

### Validity of a Monte Carlo algorithm

One often hears that for a Monte Carlo algorithm to be valid (that is, to converge towards the stationary distribution), it must
1. satisfy global balance (or detailed balance),
2. be ergodic.

The mathematically rigorous sufficient condition for the steady state (the eigenvector with eigenvalue 1) to be unique is that the chain is **irreducible** (one can go from any state to any other) and **aperiodic** (i.e., there exists no state to which one returns *with probability one* when starting from it). Under these conditions, starting from any normalized probability distribution, one converges at infinite time to the steady state.

Let us illustrate this point with a two-site problem:

$\{p(a\rightarrow b)\}= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$

Here, the eigenvalues are 1 and -1; all states are periodic, and one does not converge to the eigenvector with eigenvalue 1.

The 3-site transfer matrix
$\{p(a\rightarrow b)\}= \begin{pmatrix} 0&1&0 \\ 0&0&1 \\ 1 &0&0 \end{pmatrix}$
is an example of a periodic stochastic matrix with complex eigenvalues (of modulus 1).
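This can be verified numerically (a small check of ours): the eigenvalues of this cyclic matrix are the three cube roots of unity.

```python
import numpy as np

# The 3-site cyclic transfer matrix from the text.
T = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0]])
eigenvalues = np.linalg.eigvals(T)
# The eigenvalues are the cube roots of unity, 1 and exp(+-2*pi*i/3),
# all of modulus 1.
print(np.abs(eigenvalues))
```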

Negative eigenvalues of the transfer matrix describe oscillations between configurations. This is seen in the above 2x2 matrix: if you start on site 1 at time t=0, you have probability 0 of being on site 1 at odd times t=1,3,..., and probability 0 of being on site 2 at even times t=0,2,4,.... So we never really reach equilibrium. We now modify the above example into
$\{p(a\rightarrow b)\} = \begin{pmatrix} 0.01 & 0.99 \\ 0.99 & 0.01 \end{pmatrix}$
Now one eigenvalue equals 1, corresponding to the stationary solution, and the other equals -0.98, describing the sloshing back and forth between sites 1 and 2.
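A one-line check of these two eigenvalues (our addition):

```python
import numpy as np

T = np.array([[0.01, 0.99],
              [0.99, 0.01]])
# eigvalsh (the matrix is symmetric) returns the eigenvalues in
# ascending order: -0.98 (damped oscillation) and 1 (stationary state).
eigenvalues = np.linalg.eigvalsh(T)
print(eigenvalues)
```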

# Comments

1. Markov-chain Monte Carlo algorithms are a very general tool for integration.
2. They access the relevant information in the infinite-time limit, but one can do better (this is, among others, the subject of "perfect sampling"; see TD).
3. The dynamics of the Markov-chain Monte Carlo algorithm is not always physically relevant.
4. Many Markov-chain Monte Carlo algorithms satisfy detailed balance, but the full condition is global balance.

# References

1. P. Diaconis, *The Mathematics of mixing things up*, Journal of Statistical Physics 144, 445 (2011). [http://www-stat.stanford.edu/~cgates/PERSI/papers/mixing.pdf]