## Glauber dynamics for the Ising chain

The goal of this post is to describe a model of a heat bath for the $d=1$ Ising model. In equilibrium statistical physics, a heat bath is nothing more than a parameter $\beta=1/T$ which tells us which of the Boltzmann distributions $P_\beta(s)\propto e^{-\beta H(s)}$ to study for a particular system with Hamiltonian $H$. This neither explains why such a state is reasonable to study, nor provides any information about how such a state is reached from a non-equilibrium state (vague invocations of entropy and ergodicity notwithstanding). I’ll introduce the Glauber dynamics, which reproduce the Boltzmann distributions as unique steady states of a Markov chain (with a caveat at $T=0$). A conceptually interesting consequence of this framework is that it forces the realization that a heat bath is not a heat bath is not a heat bath. A black box that, given an Ising chain, implements the Glauber dynamics will cause equilibration of the Ising chain, but may not do the same to some other physical system. A model of a bath must be tailored to a specific physical system.

The Glauber dynamics are related to the Metropolis algorithm used for sampling from probability distributions. This is an interesting connection that I won’t explore here.

1. Ising chain and Glauber dynamics

Consider a classical spin chain with $N$ sites and periodic boundary conditions. In other words, consider $N$ variables $s_i$, $i=1,2,\ldots,N$ taking values $s_i=\pm 1$ with $s_{N+i}=s_i$. Define the Ising Hamiltonian

\begin{aligned} H=-J\sum_{i=1}^{N}s_is_{i+1}. \end{aligned}

At each time step, pick a site $j$ uniformly at random from $1$ to $N$. Let $\mathbf{s}$ be the current state of the system and let $\mathbf{s}_j'$ be the state of the system resulting from $\mathbf{s}$ via flipping the spin at site $j$, i.e. from the replacement $s_j\rightarrow -s_j$. Flip the spin at site $j$ with probability given by

\begin{aligned} w(\mathbf{s}\rightarrow\mathbf{s}_j')=\frac{\alpha}{2}\left[1-\frac{\gamma}{2}s_j(s_{j-1}+s_{j+1})\right] \end{aligned}

for some value of $\gamma$ yet to be specified; the constant $\alpha$ just sets the overall rate of the dynamics. Note that this is indeed a Markov model: the transition probabilities depend only on the current state. Incorporating the initial choice of $j$, the overall probability for the transition $\mathbf{s}\rightarrow\mathbf{s}_j'$ is just $w(\mathbf{s}\rightarrow\mathbf{s}_j')/N$.
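As a concrete reference, here is a minimal sketch of one step of these dynamics in Python. The function name, the choice $\alpha=1$, and the values $\beta=0.5$, $J=1$ are my own; the formula for $\gamma$ is the equilibrium value derived in the next section. Sites are 0-indexed, as is natural in code.

```python
import numpy as np

def glauber_step(s, gamma, rng, alpha=1.0):
    """Attempt one Glauber update on the spin chain s (entries +/-1)."""
    N = len(s)
    j = rng.integers(N)                            # pick a site uniformly at random
    left, right = s[(j - 1) % N], s[(j + 1) % N]   # periodic boundary conditions
    w = (alpha / 2) * (1 - (gamma / 2) * s[j] * (left + right))
    if rng.random() < w:                           # flip with probability w
        s[j] = -s[j]
    return s

rng = np.random.default_rng(0)
s = np.ones(100, dtype=int)                        # start from the all-up state
gamma = np.tanh(2 * 0.5 * 1.0)                     # gamma = tanh(2*beta*J), derived below
for _ in range(10_000):
    glauber_step(s, gamma, rng)
```

Note that $w\leq\frac{\alpha}{2}(1+\gamma)\leq\alpha$, so with $\alpha=1$ the flip probability is always a valid probability.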

2. The Boltzmann distribution is a steady state of Glauber dynamics

Detailed balance with respect to the Boltzmann distribution $P_\beta$ at inverse temperature $\beta$ is the condition

\begin{aligned} \frac{w(\mathbf{s}\rightarrow\mathbf{s}_j')}{w(\mathbf{s}_j'\rightarrow\mathbf{s})}=\frac{P_\beta(\mathbf{s}'_j)}{P_\beta(\mathbf{s})}=\frac{e^{-\beta H(\mathbf{s}_j')}}{e^{-\beta H(\mathbf{s})}}. \end{aligned}

If the transition probabilities $w$ satisfy this condition, then $P_\beta$ is a stationary distribution of the dynamics, and we’ll have shown that the Glauber dynamics at least preserve equilibrium (later we’ll see that, for $T>0$, they also drive a system to equilibrium). The right-hand side can be expanded:

\begin{aligned} e^{-\beta(H(\mathbf{s}_j')-H(\mathbf{s}))}&=\exp\left[\beta J\left(\sum_{i=1}^N(-1)^{\delta_{i,j}+\delta_{i+1,j}}s_is_{i+1}-\sum_{i=1}^Ns_is_{i+1}\right)\right]=\exp\left[-2\beta J\left(s_{j-1}s_j+s_js_{j+1}\right)\right]\\ &=e^{-2\beta Js_j(s_{j-1}+s_{j+1})}=\frac{1-\tanh \left[\beta Js_j(s_{j-1}+s_{j+1})\right]}{1+\tanh \left[\beta Js_j(s_{j-1}+s_{j+1})\right]} \end{aligned}

where the last equality is just $e^{-2x}=(1-\tanh x)/(1+\tanh x)$. Recognizing that $s_j(s_{j-1}+s_{j+1})/2$ takes values $0,\pm 1$ and that $\tanh 0=0$ and $\tanh (-x)=-\tanh x$, we get finally

\begin{aligned} \frac{w(\mathbf{s}\rightarrow\mathbf{s}_j')}{w(\mathbf{s}_j'\rightarrow\mathbf{s})}=\frac{1-\frac{1}{2}s_j(s_{j-1}+s_{j+1})\tanh 2\beta J}{1+\frac{1}{2}s_j(s_{j-1}+s_{j+1})\tanh2\beta J} \end{aligned}

as the detailed balance condition for $P_\beta$. Since the flipped state $\mathbf{s}_j'$ has $-s_j$ at site $j$, the reverse rate is $w(\mathbf{s}_j'\rightarrow\mathbf{s})=\frac{\alpha}{2}\left[1+\frac{\gamma}{2}s_j(s_{j-1}+s_{j+1})\right]$, and referring back to the proposed values of $w(\mathbf{s}\rightarrow\mathbf{s}_j')$ above, we see that the condition is satisfied for

$\gamma = \tanh2\beta J$.

For very high temperatures, $\beta\rightarrow 0$ and $\gamma\rightarrow 0$, so that the dynamics do not distinguish between spin flips that raise the energy of the system and those that lower it. For low temperatures, $\beta\rightarrow\infty$ and $\gamma\rightarrow 1$, so that the dynamics are much more likely to flip a spin when doing so reduces the energy of the system than when it raises it. This formalizes the intuition that at low temperatures, energy considerations win, while at high temperatures, entropy dominates.
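Detailed balance is easy to verify directly by enumerating the eight configurations of $(s_{j-1}, s_j, s_{j+1})$. A quick check (the values of $\beta$, $J$, $\alpha$ are arbitrary choices of mine):

```python
import itertools
import numpy as np

beta, J, alpha = 0.7, 1.2, 1.0
gamma = np.tanh(2 * beta * J)

for s_left, s_j, s_right in itertools.product([-1, 1], repeat=3):
    w_forward = (alpha / 2) * (1 - (gamma / 2) * s_j * (s_left + s_right))
    # flipping s_j -> -s_j turns the factor (1 - ...) into (1 + ...)
    w_backward = (alpha / 2) * (1 + (gamma / 2) * s_j * (s_left + s_right))
    delta_H = 2 * J * s_j * (s_left + s_right)   # H(s_j') - H(s)
    assert np.isclose(w_forward / w_backward, np.exp(-beta * delta_H))
```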

3. The Boltzmann distribution is the unique steady state for $T>0$

For $T>0$ (finite $\beta$), the Glauber dynamics on the state space of the Ising model form an irreducible Markov chain with a finite state space (see e.g. Reference 3 below for a detailed overview of Markov chain machinery). Such Markov chains are positive recurrent (they visit each state infinitely often, with finite mean time between visits) and therefore have unique stationary distributions, which we’ve already shown are the Boltzmann distributions. Moreover, since the chain is aperiodic, any distribution converges to the stationary one under repeated application of the transition matrix. Physically, this is the formalization of the notion that any state will thermalize.

For $T=0$, $\gamma=1$, and the Markov chain is *not* irreducible. We are therefore not guaranteed a unique stationary state, and in fact there is a continuum of them: any probabilistic mixture of the two ground states of the Ising model is a stationary solution of the dynamics, since at $T=0$ there is no possibility of a transition out of a ground state.
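For a small chain, these claims can be checked by brute force: build the full $2^N\times 2^N$ transition matrix, verify that the Boltzmann distribution is stationary, and count eigenvalues equal to one. A sketch (the bit-string encoding of states and the parameter values are my own choices):

```python
import numpy as np

N, J, alpha = 5, 1.0, 1.0

def spins(k):
    """Decode integer k into a length-N array of +/-1 spins."""
    return 1 - 2 * ((k >> np.arange(N)) & 1)

def transition_matrix(gamma):
    T = np.zeros((2**N, 2**N))
    for k in range(2**N):
        s = spins(k)
        for j in range(N):
            w = (alpha / 2) * (1 - (gamma / 2) * s[j] * (s[j - 1] + s[(j + 1) % N]))
            T[k, k ^ (1 << j)] += w / N      # transition that flips spin j
        T[k, k] = 1 - T[k].sum()             # probability of no flip
    return T

beta = 0.6
T = transition_matrix(np.tanh(2 * beta * J))

# the Boltzmann distribution is stationary: pi @ T == pi
H = np.array([-J * np.sum(spins(k) * np.roll(spins(k), -1)) for k in range(2**N)])
pi = np.exp(-beta * H)
pi /= pi.sum()
assert np.allclose(pi @ T, pi)

# T > 0: exactly one unit eigenvalue; T = 0 (gamma = 1): more than one
unit = lambda M: np.sum(np.abs(np.linalg.eigvals(M) - 1) < 1e-8)
assert unit(T) == 1
assert unit(transition_matrix(1.0)) >= 2
```

At $\gamma=1$ the two fully aligned states are absorbing (every flip probability out of them vanishes), which is where the extra unit eigenvalues come from.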

4. Magnetization dynamics

Up to this point, we’ve really just used the Glauber dynamics as a framework to hold results from equilibrium statistical mechanics (the study of the Boltzmann distributions). Now we’ll see that we can use them to model something new – the fundamentally non-equilibrium dynamics of relaxation to equilibrium.

At time-step $n$, the probability that spin $j$ is flipped is

$\frac{1}{N}\cdot\frac{\alpha}{2}\left[1-\frac{\gamma}{2}s_j(s_{j-1}+s_{j+1})\right]$

so that *given a fixed configuration $\mathbf{s}$ at time $n$*, the expected change in spin $j$ is

$-\frac{\alpha}{N}\left[s_j-\frac{\gamma}{2}(s_{j-1}+s_{j+1})\right]$,

where we’ve used the fact that $s_j^2=1$. Letting $\left\langle\cdot\right\rangle$ denote the expectation over all configurations and $E\left[\cdot\right]$ the expectation over the dynamics of a single time step, we have

$\left\langle E\left[\Delta s_j\right]\right\rangle=-\frac{\alpha}{N}\left[\left\langle s_j\right\rangle-\frac{\gamma}{2}(\left\langle s_{j-1}\right\rangle+\left\langle s_{j+1}\right\rangle)\right].$

Assume now that the system is prepared in a translationally invariant state, i.e. $\left\langle s_j\right\rangle=m$ for all $j$. The dynamics preserve this property, so we can write

$\Delta m=-\frac{\alpha}{N}\left(1-\gamma\right)m.$

Measuring time $t$ in steps, the per-step change is small (of order $\alpha/N$), so over many iterations of the Glauber dynamics we can approximate this difference equation by

$\frac{dm}{dt}=-\frac{\alpha}{N}(1-\gamma)m$,

which yields

$m(t)=m(0)e^{-\alpha(1-\gamma)t/N}.$

Therefore for $T>0$, $\gamma<1$ and the average magnetization decays exponentially in time (since we know the stationary solution is the Boltzmann distribution, the limit $m(\infty)=0$ checks out). For $T=0$, $\gamma=1$ and $m$ is conserved. This is consistent with individual trajectories still reaching a ground state: there is a path through state space of non-increasing energy from any non-ground state to either ground state. For example, imagine preparing the system in an even distribution over the states with all spins down but one. The energy of such a state may be reduced by aligning the odd spin, but with some probability, the Glauber dynamics will instead move the two domain walls apart, eventually annihilating them after they wrap around the ring, so that the system ends up in the all-up ground state.
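The passage from the difference equation to the exponential can be sanity-checked numerically; iterating the exact recursion and comparing against the continuous-time formula (all parameter values below are arbitrary choices of mine):

```python
import numpy as np

alpha, N, gamma, m0 = 1.0, 1000, 0.5, 0.8
steps = 4000

# iterate the exact difference equation: Delta m = -(alpha/N)(1 - gamma) m
m = m0
for n in range(steps):
    m -= (alpha / N) * (1 - gamma) * m

# compare with the continuum approximation m(t) = m(0) exp(-alpha(1-gamma)t/N)
m_exp = m0 * np.exp(-alpha * (1 - gamma) * steps / N)
assert abs(m - m_exp) / m_exp < 0.01
```

The agreement improves as $\alpha/N\rightarrow 0$, since the discrepancy per step is of order $(\alpha/N)^2$.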

References

## Long-range order in a scalar field at thermal equilibrium

This calculation works through an example of a system in which long-range order (in a sense made precise below) appears in spatial dimensions $d\geq 3$ but not in dimensions $d=1,2$.

Suppose that a real scalar field $u$ is defined on a $d$-dimensional cube $C$ of side length $L$, with boundary condition $u\vert_{\partial C}\equiv 0$ and an upper bound on the frequency components of the field, as would be the case if we were trying to approximate a scalar field defined on a lattice. Let the system be governed by a Hamiltonian given by

\begin{aligned} H(u)=\frac{c}{2}\int_C\nabla u\cdot\nabla u\hspace{2pt}d^dr. \end{aligned}

At nonzero temperatures, for points $x$ and $y$ close to the center of the cube, the thermal average of the squared field variation has behavior varying with dimension:

\begin{aligned} \left\langle(u(x)-u(y))^2\right\rangle_\beta&\sim\left\vert x-y\right\vert\hspace{20pt}&d=1\\ &\leq \text{const}&d\geq 3 \end{aligned}

so that for dimensions greater than two, the field is roughly constant no matter how far away you look, whereas for dimension one, the thermal fluctuations destroy this long-range order. The $d=2$ case apparently gives logarithmic dependence on separation (see e.g. these lecture notes), but I don’t show it here.

We’ll demonstrate that this behavior exists in a few steps:

1. Mode decomposition of Hamiltonian

We can start by decomposing the scalar field into Fourier modes:

$u(x)=\sum_n\hat{u}_{n}\phi_n(x)\hspace{20pt}\phi_n(x)=\prod_{i=1}^{d}\sin\left(\frac{n_i\pi x_i}{L}\right)$

where $n$ is a $d$-dimensional vector whose integer components range from $1$ to some cutoff $N$. In this basis, evaluating the Hamiltonian is simple:

\begin{aligned} H(u)&=-\frac{c}{2}\int_Cu\nabla^2u\,d^dx=\frac{c\pi^2}{2L^2}\sum_{nm}n^2\hat{u}_n\hat{u}_m\int_C\phi_n\phi_m\,d^dx=\frac{c\pi^2}{2L^2}\sum_{nm}n^2\hat{u}_n\hat{u}_m\left(\frac{L}{2}\right)^d\delta_{nm}\\ &=\frac{c\pi^2L^{d-2}}{2^{d+1}}\sum_{n}n^2\hat{u}_n^2. \end{aligned}

The first equality resulted from the given Hamiltonian by integration by parts. The modes are non-interacting, which simplifies the following calculation.
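The orthogonality bookkeeping is easy to get wrong, so here is a numerical sanity check of the $d=1$ case, computing $\frac{c}{2}\int(u')^2dx$ by quadrature and comparing with the mode-sum formula (the mode amplitudes and constants are arbitrary choices of mine):

```python
import numpy as np

L, c, d = 2.0, 1.3, 1                  # the d = 1 case
coeffs = {1: 0.7, 2: -0.4, 5: 0.2}     # a few nonzero mode amplitudes u_hat_n

# du/dx computed analytically from the mode expansion
x = np.linspace(0.0, L, 200_001)
du = sum(u_n * (n * np.pi / L) * np.cos(n * np.pi * x / L)
         for n, u_n in coeffs.items())

# H = (c/2) * integral of (du/dx)^2, by the trapezoid rule
dx = x[1] - x[0]
H_numeric = 0.5 * c * np.sum((du[:-1]**2 + du[1:]**2) / 2) * dx
H_formula = (c * np.pi**2 * L**(d - 2) / 2**(d + 1)
             * sum(n**2 * u_n**2 for n, u_n in coeffs.items()))
assert abs(H_numeric - H_formula) / H_formula < 1e-6
```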

2. Calculation of frequency-space correlation function

Consider the following thermal correlation function:

$\left\langle\hat{u}_{m_1}\hat{u}_{m_2}\right\rangle_\beta=\frac{\left(\prod_n\int_{-\infty}^{\infty}d\hat{u}_n\right)\hat{u}_{m_1}\hat{u}_{m_2}e^{-\beta H(u)}}{\left(\prod_n\int_{-\infty}^{\infty}d\hat{u}_n\right)e^{-\beta H(u)}}.$

By the symmetry of the integrand, it’s clear that the correlation function vanishes unless $m_1=m_2$. In this case:

$\left\langle\hat{u}_{m}^2\right\rangle_\beta=\frac{\int_{-\infty}^{\infty}d\hat{u}_m\hat{u}_m^2e^{-c\pi^2L^{d-2}\beta m^2\hat{u}_m^2/2^{d+1}}}{\int_{-\infty}^{\infty}d\hat{u}_me^{-c\pi^2L^{d-2}\beta m^2\hat{u}_m^2/2^{d+1}}}=\frac{1}{2}\left(c\pi^2L^{d-2}\beta m^2/2^{d+1}\right)^{-1}=\left(\frac{2^d}{c\pi^2}\right)\beta^{-1}m^{-2}L^{2-d},$

so that overall we get

$\left\langle\hat{u}_{m}\hat{u}_{n}\right\rangle_\beta=\delta_{mn}\left(\frac{2^d}{c\pi^2}\right)\beta^{-1}n^{-2}L^{2-d}$.

Since the factorization of the partition function is a direct consequence of the absence of mode-coupling terms in the Hamiltonian, we just had to do a single-mode calculation and then write this answer down.
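The single-mode integral is just the second moment of a Gaussian: for weight $e^{-a\hat{u}_m^2}$ one has $\langle\hat{u}_m^2\rangle=1/(2a)$. A quick quadrature check (the value of $a$, standing in for $c\pi^2L^{d-2}\beta m^2/2^{d+1}$, is arbitrary):

```python
import numpy as np

a = 0.8                                    # stands in for c*pi^2*L^(d-2)*beta*m^2/2^(d+1)
u = np.linspace(-30.0, 30.0, 1_000_001)    # wide enough that the tails are negligible
weight = np.exp(-a * u**2)

# ratio of integrals <u^2> = (int u^2 e^{-a u^2} du) / (int e^{-a u^2} du)
second_moment = np.sum(u**2 * weight) / np.sum(weight)
assert np.isclose(second_moment, 1 / (2 * a), rtol=1e-6)
```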

3. Calculation of average squared disorder

Expanding the squared difference between field values at points $x$ and $y$ in terms of the Fourier modes, we get

\begin{aligned} (u(x)-u(y))^2&=\left(\sum_n\hat{u}_n\left(\phi_n(x)-\phi_n(y)\right)\right)^2\\ &=\sum_{nm}\hat{u}_n\hat{u}_m\left(\phi_n(x)-\phi_n(y)\right)\left(\phi_m(x)-\phi_m(y)\right). \end{aligned}

Taking the expectation using the frequency-space correlation function:

\begin{aligned} \left\langle(u(x)-u(y))^2\right\rangle_\beta&=\sum_{nm}\left(\phi_n(x)-\phi_n(y)\right)\left(\phi_m(x)-\phi_m(y)\right)\delta_{mn}\left(\frac{2^d}{c\pi^2}\right)\beta^{-1}n^{-2}L^{2-d}\\ &=\left(\frac{2^d}{c\pi^2}\right)\beta^{-1}L^{2-d}\sum_{n}\frac{\left(\phi_n(x)-\phi_n(y)\right)^2}{n^2} \end{aligned}

Note that up to this point, everything has been exact (for the continuum theory defined in the problem statement, which itself could be used as an approximation of a lattice system).

4. Approximations for different values of $d$

For $d=1$:
\begin{aligned} \left\langle(u(x)-u(y))^2\right\rangle_\beta&=\left(\frac{2^d}{c\pi^2}\right)\beta^{-1}L^{2-d}\sum_{n=1}^N\frac{\left(\sin\left(\frac{n\pi x}{L}\right)-\sin\left(\frac{n\pi y}{L}\right)\right)^2}{n^2} \end{aligned}

For very large $L$ and $x$, $y$ close to $L/2$ on the scale of $L$, the summand varies slowly in $n$, so we can approximate the sum by an integral. Moreover, the $1/n^2$ factor heavily suppresses the summand at large $n$, and the cutoff $N$ (which, at fixed frequency cutoff, grows with $L$) can be made as large as we like by taking $L$ large enough, so we can extend the upper limit to infinity without much error and evaluate the resulting integral (I did it in Mathematica):

\begin{aligned} \left\langle(u(x)-u(y))^2\right\rangle_\beta&\approx\left(\frac{2^d}{c\pi^2}\right)\beta^{-1}L^{2-d}\int_0^\infty\frac{\left(\sin\left(\frac{n\pi x}{L}\right)-\sin\left(\frac{n\pi y}{L}\right)\right)^2}{n^2}dn\\ &=\left(\frac{2}{L}\right)^{d-1}c^{-1}\beta^{-1}\left\vert x-y\right\vert \end{aligned}
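As a sanity check, the $d=1$ result can be compared against the exact mode sum evaluated numerically. The parameter values are my own; with $c=\beta=1$ the prediction is just $\vert x-y\vert$:

```python
import numpy as np

c, beta, L = 1.0, 1.0, 1000.0
x, y = 495.0, 505.0                        # near the center, |x - y| = 10
n = np.arange(1, 100_001)                  # large cutoff; the tail is O(1/N)

# the exact mode sum for <(u(x) - u(y))^2> in d = 1
summand = (np.sin(n * np.pi * x / L) - np.sin(n * np.pi * y / L))**2 / n**2
exact_sum = (2 / (c * np.pi**2 * beta)) * L * summand.sum()

predicted = abs(x - y) / (c * beta)        # the d = 1 formula above
assert abs(exact_sum - predicted) / predicted < 0.02
```

The small residual discrepancy is of order $\vert x-y\vert/L$, consistent with the integral approximation.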

For $d\geq 3$ the cutoff is significant: converting the sum over the vector $n$ to an integral introduces a factor of $n^{d-1}$ in the measure, so the integrand no longer decays at large $n$. The largest the numerator can be is four, since it’s the square of a difference between products of sines. Fixing the numerator to its maximum value and integrating up to the cutoff gives a bound of order $N^{d-2}\sim L^{d-2}$ on the sum, which cancels the $L^{2-d}$ prefactor and yields an upper bound independent of $x$, $y$, and $L$.