Introduction

In this lecture, we study the statistical mechanics and computational physics of the Ising model. This archetypal physical system undergoes an order-disorder phase transition. The Ising model shares this transition, and many other properties, with more complicated models which cannot be analyzed so well.

Definition

Consider spins σ_k = ±1 on a lattice of N sites k = 0, ..., N-1, say, in two dimensions. Spins prefer to be aligned with each other, and the energy is written as

E = -J \sum_{\langle k, l \rangle} \sigma_k \sigma_l,

where the sum runs over all pairs of neighboring sites, with positive J, which we take equal to 1.

The partition function of the Ising model is given by

Z = \sum_{\sigma_0 = \pm 1} \cdots \sum_{\sigma_{N-1} = \pm 1} e^{-\beta E(\sigma_0, \ldots, \sigma_{N-1})}

and equivalently by

Z = \sum_{E} N(E)\, e^{-\beta E}.

Note that the first equation expresses the partition function as a sum over the 2^N spin configurations, and the second equation as a sum over the roughly N possible energies, using the density of states N(E), the number of configurations with energy E.

The analytical and numerical approaches for the Ising model often aim at computing the equation of state.

Physical observables, such as the energy, the free energy, the specific heat, etc., can also be expressed easily. For example, for the internal energy, we have

\langle E \rangle = \frac{1}{Z} \sum_{E} N(E)\, E\, e^{-\beta E}

and

\langle E \rangle = -\frac{\partial}{\partial \beta} \log Z.

This gives for the specific heat:

C_V = \frac{\partial \langle E \rangle}{\partial T} = \beta^2 \left( \langle E^2 \rangle - \langle E \rangle^2 \right)

and

C_V = \beta^2 \frac{\partial^2}{\partial \beta^2} \log Z.

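As a concrete illustration of these formulas (a minimal sketch; the function name thermodynamics and the dictionary dos are our own choices, with dos playing the role of the density of states N(E) produced, for instance, by the enumeration programs below), the mean energy and the specific heat at any temperature follow directly from N(E):

from math import exp

def thermodynamics(dos, beta):
    # dos is a dictionary  energy -> number of configurations  (density of states N(E))
    Z = sum(dos[E] * exp(-beta * E) for E in dos)
    E_mean = sum(dos[E] * E * exp(-beta * E) for E in dos) / Z
    E2_mean = sum(dos[E] * E ** 2 * exp(-beta * E) for E in dos) / Z
    C_V = beta ** 2 * (E2_mean - E_mean ** 2)    # specific heat from the fluctuation formula
    return E_mean, C_V

# density of states of the 2x2 Ising model with periodic boundary conditions
dos_2x2 = {-8: 2, 0: 12, 8: 2}
print thermodynamics(dos_2x2, 1.0)

Because the temperature enters only through the Boltzmann weights, the same dos dictionary can be reused for every value of beta.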

Description of the lattice

We must clarify how to represent the lattice sites. In one dimension, this is easy. On L×L×... lattices, rather than identifying each site through its coordinates (k_x, k_y, ...), it is better to think of the sites as numbered 0 ... N-1, with the geometry defined through a "neighbor" function. In Python, this can easily be done with dictionaries. Programs written in this way are largely independent of the underlying lattice.

Consider the following function, which builds two dictionaries (site_dic and x_y_dic) and a tuple of tuples called nbr (for neighbors).
The dictionaries site_dic and x_y_dic translate between the (row, column) coordinates and the site index, and nbr tells us which sites of the lattice are the neighbors (right, up, left, down) of the current site. A short usage example follows the function.
def square_neighbors(L):
    """Set up an L x L square lattice with periodic boundary conditions."""
    N = L * L
    site_dic = {}   # (row, column) -> site index
    x_y_dic = {}    # site index -> (row, column)
    for j in range(N):
        row = j // L
        column = j - row * L
        site_dic[(row, column)] = j
        x_y_dic[j] = (row, column)
    nbr = []
    for j in range(N):
        row, column = x_y_dic[j]
        right_nbr = site_dic[row, (column + 1) % L]
        up_nbr = site_dic[(row + 1) % L, column]
        left_nbr = site_dic[row, (column - 1 + L) % L]
        down_nbr = site_dic[(row - 1 + L) % L, column]
        nbr.append((right_nbr, up_nbr, left_nbr, down_nbr))
    nbr = tuple(nbr)
    return nbr, site_dic, x_y_dic
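As a quick sanity check (a minimal sketch; the 3x3 lattice is just an example), we can print the neighbors of the central site of a small lattice:

nbr, site_dic, x_y_dic = square_neighbors(3)
print x_y_dic[4], nbr[4]   # central site of the 3x3 lattice and its (right, up, left, down) neighbors
# expected output: (1, 1) (5, 7, 3, 1)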

Plotting configurations

Here's an idea for plotting configurations ...


from pylab import figure, axis, plot, savefig, show

figure()
x_plus = []
y_plus = []
x_minus = []
y_minus = []
for i in range(N):
    x, y = x_y_dic[i]
    if S[i] == 1:
        x_plus.append(x)
        y_plus.append(y)
    else:
        x_minus.append(x)
        y_minus.append(y)
axis('scaled')
axis([-0.5, L - 0.5, -0.5, L - 0.5])
plot(x_plus, y_plus, 'r+', markersize=10)
plot(x_minus, y_minus, 'bx', markersize=10)
savefig('test2.eps')
show()

Enumerations (Basic algorithms)

The most straightforward approach to the Ising model consists in enumerating all configurations.

Binary enumeration

For the 2x2 lattice, this can be done on a piece of paper:

[Figure: Enumeration of the 16 configurations of a 2x2 Ising model (OD_ising_2x2_binary.jpg)]


A one-line Python program generates these configurations, if we identify a "-" spin with a "0" bit and a "+" spin with a "1" bit in the binary representation of the numbers 0 ... 2^N - 1:
for k in range(16): print bin(k)[2:].zfill(4)
From these configurations, it is best to first determine the density of states, and from it all the observables we want. We can then determine the properties at different temperatures without rerunning expensive calculations.
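Here is how such an enumeration might look in practice (a minimal sketch, not the book's own routine; it assumes the square_neighbors function defined above and records the density of states in a dictionary):

L = 2
N = L * L
nbr, site_dic, x_y_dic = square_neighbors(L)
dos = {}                                   # density of states: energy -> number of configurations
for k in range(2 ** N):
    S = [2 * ((k >> j) & 1) - 1 for j in range(N)]       # bit j of k is the spin on site j
    # sum over the right and up bonds of every site; on the periodic 2x2 lattice
    # each pair of neighbors is connected by two such bonds
    E = -sum(S[i] * S[nbr[i][0]] + S[i] * S[nbr[i][1]] for i in range(N))
    dos[E] = dos.get(E, 0) + 1
print dos                                  # for the 2x2 lattice: N(-8) = 2, N(0) = 12, N(8) = 2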

Gray code enumeration

Even on the level of basic enumerations, there is room for improvement, because the above sequence, based on the binary representation of the numbers k = 0 ... 2^N - 1, is not unique, and it is not the best one. Consider the following Gray-code enumeration:
[Figure: Gray-code enumeration of the 16 configurations of a 2x2 Ising model (OD_ising_2x2_gray.jpg)]



To understand how this enumeration works and why it is interesting, it is best to write out each configuration on a single line: between two successive configurations, exactly one spin is flipped, so the energy can be updated locally rather than recomputed from scratch.
[Figure: Gray-code enumeration of the 16 configurations of a 2x2 Ising model, one configuration per line (OD_gray_code_4.jpg)]
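The following lines illustrate this point (a minimal sketch using the standard reflected binary Gray code gray(k) = k ^ (k >> 1), not the bookkeeping of the book's Gray-code algorithm): successive codes differ by a single bit, so a single spin flips and the energy is updated in constant time.

L = 2
N = L * L
nbr, site_dic, x_y_dic = square_neighbors(L)
S = [-1] * N                                   # gray(0) = 0: the all-minus configuration
E = -sum(S[i] * S[nbr[i][0]] + S[i] * S[nbr[i][1]] for i in range(N))
dos = {E: 1}
for k in range(1, 2 ** N):
    flipped = (k ^ (k >> 1)) ^ ((k - 1) ^ ((k - 1) >> 1))
    j = flipped.bit_length() - 1               # index of the single bit (spin) that changes
    h = sum(S[l] for l in nbr[j])              # local field of the neighbors of site j
    E += 2 * h * S[j]                          # O(1) energy update for flipping spin j
    S[j] = -S[j]
    dos[E] = dos.get(E, 0) + 1
print dos                                      # again N(-8) = 2, N(0) = 12, N(8) = 2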



We notice that the word "enumeration" has two meanings: it can refer to the ''listing'' of items (as in our example), or to the ''counting''. The famous Onsager solution of the Ising model, in the formulation of Kac and Ward, is such a counting method, see <ref name="SMAC">W. Krauth ''Statistical mechanics: Algorithms and Computations'' (Oxford University Press, 2006) See also [http://www.smac.lps.ens.fr/ the book's website]</ref>, sect. 5.1.4.

Metropolis algorithm

Here is a simple implementation of the Metropolis algorithm (<ref name="SMAC"/>, algorithm 5.7, p. 250).
A basic task in statistical physics is to write a local Metropolis algorithm for the Ising model. This program is even simpler than a basic Markov-chain simulation for hard disks, but the Ising model has a less immediate connection with classical mechanics (there is no molecular dynamics algorithm for the model). Its phase transition is better understood than that of hard disks, and the results can be compared with exact solutions in two dimensions, even for finite systems, so that very fine tests of the algorithm on lattices of any size are possible. Analogously to the hard-disk Markov-chain algorithm, we randomly pick a site and attempt to flip the spin at that site.

[Figure: Metropolis algorithm for the Ising model (OD_ising_3x3_metropolis.jpg)]

from random import choice, randint, uniform
from math import exp
# square_neighbors(L) as defined above

L = 32
N = L * L
S = [choice([-1, 1]) for k in range(N)]        # random initial configuration
beta = 0.42
nbr, site_dic, x_y_dic = square_neighbors(L)
for i_sweep in range(100):
    for i in range(N):                         # one sweep = N attempted spin flips
        k = randint(0, N - 1)                  # pick a random site
        h = sum(S[nbr[k][j]] for j in range(4))
        Delta_E = 2.0 * h * S[k]               # energy change caused by flipping spin k
        Upsilon = exp(-beta * Delta_E)
        if uniform(0.0, 1.0) < Upsilon: S[k] = -S[k]   # Metropolis acceptance step

The problem with the local Monte Carlo algorithm is the correlation time. This can most easily be understood by looking at the histogram of the magnetization of the system. At high temperature, it is strongly peaked around zero; at low temperatures, it is bimodal; but close to the critical point, it explores a region which is of the order of ΔM ~ N (without going into details).
[Figure: Trajectory of the magnetization of a 16x16 Ising model for 10^6 iterations of the Metropolis algorithm; there are of the order of 10^6/256^2 ~ 15 independent configurations (OD_markov_ising_trajectory.jpg)]
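To produce such data yourself (a minimal sketch that continues the Metropolis program above; the number of sweeps, the bin count, and the file names are arbitrary choices), record the total magnetization once per sweep and plot its trajectory and histogram:

from pylab import figure, plot, hist, savefig

M = []                                           # magnetization trajectory
for i_sweep in range(1000):
    for i in range(N):
        k = randint(0, N - 1)
        h = sum(S[nbr[k][j]] for j in range(4))
        Delta_E = 2.0 * h * S[k]
        if uniform(0.0, 1.0) < exp(-beta * Delta_E): S[k] = -S[k]
    M.append(sum(S))                             # total magnetization after each sweep
figure()
plot(M)                                          # trajectory of the magnetization
savefig('magnetization_trajectory.png')
figure()
hist(M, bins=50)                                 # histogram of the magnetization
savefig('magnetization_histogram.png')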


Cluster algorithms

Single-spin-flip Monte Carlo algorithms are slow close to T_c, because the histogram of essential values of the magnetization is wide while each move changes the magnetization by at most ±2. To sample faster, we must favor moves which change the magnetization by more than ±2. However, using the single-spin-flip algorithm in parallel, on several sites at a time, only hikes up the rejection rate. Neither can we, so to speak, solidly connect all neighboring spins of the same orientation and flip them all at once: doing so would quickly lead to a perfectly aligned state, from which there would be no escape.

Swendsen and Wang [1] and Wolff [2] have proposed cluster algorithms, which have served as role models for about a generation of research.
Starting from an initial site k, one adds neighboring sites of the same spin orientation with probability p = 1 - e^{-2βJ} (see the program below) until the construction stops. This generates the move shown in the figure.
[Figure: Move of the Wolff algorithm; neighboring sites are added to the cluster with probability p, if they have the correct sign (OD_spins_wolff.jpg)]


We present and implement the Wolff cluster algorithm for the Ising model (SMAC algorithm 5.9, p. 257). Note that the Pocket and the Cluster could be taken as sets rather than lists (see the variant after the program). Note also that k does not need to be a random element of the Pocket: any element will do.

from random import choice, randint, uniform
from math import exp
# square_neighbors(L) as defined above

L = 32
N = L * L
S = [choice([-1, 1]) for k in range(N)]
beta = 0.4407                                  # close to the critical temperature
p = 1.0 - exp(-2.0 * beta)
nbr, site_dic, x_y_dic = square_neighbors(L)
for i in range(100):
    k = randint(0, N - 1)                      # seed site of the cluster
    Pocket = [k]                               # sites whose neighbors remain to be examined
    Cluster = [k]                              # sites belonging to the cluster
    while Pocket != []:
        k = choice(Pocket)
        for l in nbr[k]:
            if S[l] == S[k] and l not in Cluster and uniform(0.0, 1.0) < p:
                Pocket.append(l)
                Cluster.append(l)
        Pocket.remove(k)
    for k in Cluster: S[k] = -S[k]             # flip the entire cluster
    print i, len(Cluster)
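Following the remark above, here is a variant in which the Pocket and the Cluster are sets (a minimal sketch; pop() replaces the random choice, since any element of the Pocket will do, and set membership tests are cheap):

for i in range(100):
    k = randint(0, N - 1)
    Pocket = set([k])                          # sites whose neighbors remain to be examined
    Cluster = set([k])                         # sites belonging to the cluster
    while Pocket:
        k = Pocket.pop()                       # any element of the Pocket will do
        for l in nbr[k]:
            if S[l] == S[k] and l not in Cluster and uniform(0.0, 1.0) < p:
                Pocket.add(l)
                Cluster.add(l)
    for k in Cluster: S[k] = -S[k]             # flip the entire cluster
    print i, len(Cluster)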



References


  1. R. H. Swendsen, J.-S. Wang (1987), Nonuniversal critical dynamics in Monte Carlo simulations, Physical Review Letters 58, 86–88.
  2. U. Wolff (1989), Collective Monte Carlo updating for spin systems, Physical Review Letters 62, 361–364.