Ising models: what are they?

From physics to computer science


Not to be confused with an icing machine, an Ising machine is a computational model that has gained considerable popularity and traction in recent years for its potential to solve previously intractable problems. But what exactly is an Ising machine, what is an Ising model, and are they actually new?

What are Ising models?

Ising models were first developed in the early 20th century as a model for magnetism (this paper, if you are curious and can read German). Ising models have the advantage of being conceptually simple while still being able to describe complex systems. A physical system can be completely described by giving an expression for its energy. An Ising model consists of “spins” which can take values of $+1$ or $-1$. The energy of an Ising model is written in terms of whether pairs of spins are the same or different, which can be expressed as products of spins. Mathematically, the energy can be written as

$E=\sum_{i,j}J_{ij}\sigma_i\sigma_j$,

where $J$ is a matrix that keeps track of the interactions and the subscripts index the different variables. Early physics studies were interested in highly structured Ising models, for example where all the spins are arranged in a line (the original paper on the topic), or on a grid with equal interaction strengths (the matrix $J$). These models were very helpful for understanding the collective behavior of simple physical systems, but are not very interesting from a computing perspective. Ising models become interesting for computing when the interactions, described by the matrix $J$, are allowed to be arbitrary. In that case, arbitrary Ising models can be mapped to hard combinatorial optimization problems, in particular the class known as NP-hard optimization problems. The simplest example is to note that if we place nonzero $J$ values where edges exist in a graph, and zeros elsewhere, then the lowest energy state of the Ising model corresponds to a maximum cut of the graph, one of the early problems shown to be NP-hard.
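To make the Max-Cut connection concrete, here is a minimal sketch in Python (the graph, function names, and brute-force search are illustrative choices of ours, not part of any particular toolchain): we place $J_{ij}=1$ on the edges of a small graph and search all spin configurations for the lowest energy.

```python
# Illustrative sketch: Max-Cut of a small graph as an Ising ground state.
import itertools

import numpy as np

# A 4-node cycle graph: J_ij = 1 where an edge exists, 0 elsewhere.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
J = np.zeros((n, n))
for i, j in edges:
    J[i, j] = 1.0

def ising_energy(spins, J):
    """E = sum_{i,j} J_ij s_i s_j for spins s_i in {-1, +1}."""
    return float(spins @ J @ spins)

# Brute-force search over all 2^n spin configurations.
best_spins, best_energy = None, float("inf")
for bits in itertools.product([-1, 1], repeat=n):
    s = np.array(bits)
    e = ising_energy(s, J)
    if e < best_energy:
        best_spins, best_energy = s, e

# Edges whose endpoints have opposite spins are "cut"; minimizing the
# Ising energy maximizes the number of cut edges.
cut = [(i, j) for i, j in edges if best_spins[i] != best_spins[j]]
print("ground state:", best_spins, "energy:", best_energy)
print("cut edges:", cut)
```

On this cycle graph the ground state is the alternating configuration, which cuts all four edges, exactly the maximum cut.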

This means that Ising models can be used as a description of hard combinatorial optimization problems. It also turns out that physical systems tend to naturally yield interactions based on variables being the same or different. Thus, Ising models provide us with a tool that on one hand can be used to map important optimization problems, while on the other hand can be engineered into physical systems. For this reason, Ising models naturally arise when using quantum systems to solve optimization problems. An interesting question (and one which we will not fully answer here) is why Ising models tend to arise when deriving physical models, while the QUBO model we discuss later tends not to. One answer is that the Ising model is symmetric under inverting the $+1$ and $-1$ values, and physical laws tend to be derived from symmetries (the QUBO model is not symmetric in this way, as we will see); symmetries also tend to make physics calculations within the model easier. One concrete consequence is a nice mathematical relationship with other objects used in quantum mechanics, known as the Pauli matrices. Another reason could be psychological: the people deriving these models tend to be physicists, and since physicists are familiar with Ising models, they tend to make choices in derivations that yield Ising expressions rather than other equivalent models. In truth, the reason is probably a mixture of the physical/mathematical and the psychological explanations, but what matters is that in practice physical systems tend to be expressed in terms of Ising models.

A slightly more general version of the Ising model is often used in practice, which includes terms that penalize individual spin values:

$E=\sum_{i,j}J_{ij}\sigma_i\sigma_j+\sum_i h_i\sigma_i$,

where $h$ is a vector. Ising models are a natural interface between physics and computer science: they can describe computing in a way that is physically natural.
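As a small illustration (using the same NumPy conventions as the sketch above; the function name is ours), the field term simply adds a per-spin bias to the energy:

```python
# Illustrative sketch of the more general Ising energy with a field term.
import numpy as np

def ising_energy(spins, J, h):
    """E = sum_{i,j} J_ij s_i s_j + sum_i h_i s_i."""
    return float(spins @ J @ spins + h @ spins)

# A field h_0 = +1 penalizes spin 0 taking the value +1 (it adds +1 to
# the energy), so at low energy that spin prefers the value -1.
spins = np.array([-1, 1])
J = np.array([[0.0, 1.0], [0.0, 0.0]])
h = np.array([1.0, 0.0])
print(ising_energy(spins, J, h))  # (-1)*1*(1) + 1*(-1) = -2
```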

Quadratic Unconstrained Binary Optimization models

One of the most useful mappings of Ising models is to a class of models known as quadratic unconstrained binary optimization (QUBO) models. These models are almost identical to Ising models except that they involve variables $x$, which take the value $0$ or $1$, and take the form

$E=\sum_{i,j} Q_{ij} x_i x_j \rightarrow \sum_{i,j} \frac{1}{4} Q_{ij} \left(\sigma_i \sigma_j-\sigma_i-\sigma_j+1 \right)=\frac{1}{4}\left( \sum_{i,j}Q_{ij}\sigma_i\sigma_j-\sum_{i,j}(Q_{ij}+Q_{ji})\sigma_i+\sum_{i,j}Q_{ij} \right)$.

Since we can use the mapping $x=(1-\sigma)/2$, it is clear that a QUBO can be mapped into an Ising model (at least the more general version involving $h$) and vice versa. We show the mapping from QUBO to Ising in the above equation; note that adding constants to the energy has no effect on which state has the minimum energy, so they can be ignored in practice. Conceptually, a QUBO interaction only adds to the energy if both variables take the $1$ value. Thus $00$, $01$, and $10$ are equivalent, and this interaction is somewhat natural in computing (since penalizing $11$ is effectively a disjunctive clause with two negated literals), but does not tend to arise physically. The QUBO mapping is useful and may be a preferred way to program for many users (our software provides both formulations). Since the Ising formulation more closely describes what the hardware is doing, it is useful for understanding how the device behaves on different problems. For example, the dynamic range (the ratio of the largest and smallest magnitude non-zero coupling) will generally be different in the QUBO and Ising settings, but the Ising value is the one that is actually relevant for device performance.
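Here is a minimal sketch of this conversion, assuming NumPy arrays and our own function names (not any particular library's API); it builds $(J, h)$ and a constant offset from $Q$ and checks that the two energies agree on every configuration of a small example:

```python
# Illustrative QUBO -> Ising conversion via x = (1 - sigma)/2.
import itertools

import numpy as np

def qubo_to_ising(Q):
    """Return (J, h, offset) so that for all configurations:
    sum Q_ij x_i x_j == sum J_ij s_i s_j + sum h_i s_i + offset."""
    J = Q / 4.0
    h = -(Q.sum(axis=1) + Q.sum(axis=0)) / 4.0
    offset = Q.sum() / 4.0  # constant; irrelevant to which state is minimal
    return J, h, offset

Q = np.array([[1.0, 2.0], [0.0, -3.0]])
J, h, offset = qubo_to_ising(Q)

# Verify the two formulations agree on all 2^n binary configurations.
for bits in itertools.product([0, 1], repeat=len(Q)):
    x = np.array(bits)
    s = 1 - 2 * x  # x = (1 - s)/2  <=>  s = 1 - 2x
    assert np.isclose(float(x @ Q @ x), float(s @ J @ s + h @ s + offset))
```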

Spin Glass

The physics of Ising models has been well studied; one particular context is the setting where couplings are chosen randomly. The behavior of these systems gives some interesting insight into hard optimization problems. They exhibit what is known as glassy behavior: once they cool below a certain temperature, they tend to become stuck in what are known as local energy minima. These are configurations where small changes cannot lower the energy, but large changes could find a better solution. The behavior is called “glassy” because it is exactly how physical glasses (for example window glass) behave, and these random Ising systems are often called “spin glasses” (lecture notes on glass physics can be found here). This behavior has a deep connection to problems being difficult to solve. If we could easily keep lowering the energy of any spin system by cooling, then any hard optimization problem could be solved by mapping it to an Ising system and cooling. On the other hand, conventional thinking in computer science is that certain problems should be hard to solve using any method (this is the famous $P\neq NP$ conjecture). These transitions to glassy states are how such problems become hard to solve for physical systems, or for algorithms that simulate physical systems, such as simulated annealing.
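As a rough illustration, here is a simulated annealing sketch on a small random spin glass (the couplings, cooling schedule, and parameters are arbitrary choices of ours): runs from different random seeds typically freeze into different local minima rather than reliably finding the ground state.

```python
# Illustrative simulated annealing on a random spin glass.
import numpy as np

rng = np.random.default_rng(0)
n = 20
J = np.triu(rng.normal(size=(n, n)), k=1)  # random couplings, i < j

def energy(s):
    return float(s @ J @ s)

s = rng.choice([-1, 1], size=n)
for T in np.geomspace(5.0, 0.01, 5000):  # geometric cooling schedule
    i = rng.integers(n)
    s_new = s.copy()
    s_new[i] *= -1  # propose flipping one spin
    dE = energy(s_new) - energy(s)
    # Metropolis rule: always accept downhill moves, sometimes uphill.
    if dE <= 0 or rng.random() < np.exp(-dE / T):
        s = s_new
print("final energy:", energy(s))
# Rerunning with different seeds generally ends at different energies:
# below the freezing temperature the system gets stuck wherever it fell.
```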

To put it another way, since we have established that optimization problems can be mapped to physical systems, the conjecture that these problems are hard for all computational methods puts constraints on the physics of the system. This point is worth discussing in depth, since it probably seems strange to think of a conjecture in computer science having consequences for the behavior of physical systems, but that is exactly what is happening here. To see why, let us consider a counter-example. Assume for the moment that Ising systems never behave as spin glasses, but instead can quickly (in a time which only grows relatively slowly with the number of spins; for the exact sense see the P versus NP page) reach thermal equilibrium at any temperature. We can then map an NP-hard problem to such a system so that the lowest energy state of the physical system uniquely corresponds to the solution of the optimization problem. At a low enough temperature the system will be in a state corresponding to the optimal solution with very high probability, so if we can simulate cooling to a very low temperature quickly, we can solve the optimization problem quickly. Viewed as an optimization algorithm, such a simulation would have shown that $P=NP$. The only two potential ways out of this conclusion are that we cannot simulate the system efficiently (which we know isn't true, because classical spin systems are relatively easy to simulate), or that the physics of the system means it takes a very long time for the real physical system being simulated to reach thermal equilibrium. The latter is what actually happens in practice: equilibration times much longer than the age of the universe are not uncommon. While glasses (including the silicate ones we use to build our houses and drink soda out of) seem fairly mundane and boring, they are actually fascinating physical systems which in practice are never truly in thermal equilibrium. These humble materials have rich and poorly understood physics and deep connections to fundamental issues in computer science.

Conclusion

At a high level, quantum mechanics doesn't actually change this picture much. While quantum mechanics is likely to be able to find better solutions to combinatorial optimization problems faster (this has been observed in condensed matter systems), the general consensus is that it will not make these problems easy. Furthermore, while quantum mechanics is not generally easy to simulate classically, physicists have shown that quantum systems do indeed exhibit spin glass phases. However, quantum mechanics does hold the promise of allowing equilibration to occur much faster, greatly improving performance. This broadly agrees with the theoretical computer science perspective that quantum hardware may be revolutionary in making optimization much faster, even though the problems will remain hard.