# Interaction graph connectivity

## Interactions

One of the most powerful features of entropy quantum computing, and of optical devices in general, is that because light particles move, it is relatively easy to couple them to each other arbitrarily. This is well documented and understood in the __literature__: since light is constantly moving and light pulses can easily be divided into parts, realizing full connectivity is conceptually straightforward. An effective scheme can be to simply pull off a small part of one pulse and add it to another (see figure 1 of __this paper__ from 2016, for example). The interactions can also be simulated using classical feedback, by measuring each pulse and using a laser to inject what the feedback pulse should be. Although such an approach disrupts entanglement, it provides an easy way to experiment with large-scale, highly connected devices. Most, if not all, real industrial optimization problems have highly connected interaction graphs, and the overhead of mapping to less-connected hardware can be huge. A quick back-of-the-envelope calculation (explained later in this document) shows that, under realistic constraints, mapping a highly connected problem with a few hundred variables (still small in terms of optimization problems) to a non-fully-connected chip-based architecture takes on the order of 10,000 qubits. This ratio gets worse as problems get bigger: a more typical 10,000-variable problem would require on the order of 100 million physical qubits. Our hardware does not have this issue; one qubit is one variable in an optimization problem.

In our paradigm, any qubit or qudit can be made to interact with any other one. This means that the mapping of problem variables to physical qubits/qudits is a trivial one-to-one mapping: a single binary variable maps to a single qubit, and a single discrete variable maps to a single qudit. If we were restricted in which ones could interact and wanted to solve a problem which is not naturally compatible with those constraints, then the only way to do it is through a one-to-many mapping, where each variable is mapped to many qubits or qudits. There are a few known ways to do this, which we will discuss in passing, but they won't be the focus because they aren't needed on our hardware. What we will do, however, is show that lower connectivity can introduce fundamental limitations in how problems can be mapped. The headline result is that, for realistic restrictions arising from placement on a two-dimensional chip, the size of a highly connected problem which can be solved goes as the square root of the size of the device, whereas on our hardware it scales directly with the size. As an example, we examine the TRIAD method, proposed in __this paper__, for mapping a complete interaction graph to quasi-planar hardware. The qubit estimate based on this proposal takes another parameter, the number of qubits each qubit can interact with, denoted d; here we consider d=6 and d=20 as examples (note that this estimate is not necessarily accurate when `$n\approx d$`).
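The back-of-envelope numbers above can be sketched in code. This is only an illustrative order-of-magnitude estimate, assuming the roughly quadratic embedding overhead discussed in this section; it is not the TRIAD formula and not a hardware-accurate count.

```python
# Illustrative back-of-envelope estimate only: assumes each variable of a
# highly connected n-variable problem becomes a chain of ~n physical qubits
# when minor-embedded into quasi-planar hardware, i.e. ~n**2 qubits total.

def physical_qubit_estimate(n_variables):
    """Order-of-magnitude physical-qubit count for embedding a highly
    connected n-variable problem into non-fully-connected hardware."""
    return n_variables ** 2

print(physical_qubit_estimate(100))     # -> 10000
print(physical_qubit_estimate(10_000))  # -> 100000000
```

With this scaling, a problem of a few hundred variables already lands in the ten-thousand-qubit range, and a 10,000-variable problem in the hundred-million range, matching the figures quoted above.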

## Interaction graphs

A graph (in the formal computer science sense) consists of a set of nodes connected by edges; the nodes can be visualized as circles, and the edges as lines connecting the circles. For analog hardware like ours, we can think of an interaction graph: the graph of what can directly interact with what. For our hardware this is a graph with edges joining every pair of nodes; in other words, any qubit or qudit can be made to interact with any other one. This kind of graph is known as a **complete graph**. This seems trivial, but not all devices have full connectivity, and in many cases the way a device is physically built limits this graph to specific structures.
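As a quick sanity check on the definition, a complete graph on n nodes contains an edge for every pair of nodes, n(n-1)/2 in total. A minimal sketch:

```python
from itertools import combinations

def complete_graph_edges(n):
    """All edges of the complete graph K_n on nodes 0..n-1:
    every pair of distinct nodes is connected."""
    return list(combinations(range(n), 2))

# K_5 has 5*4/2 = 10 edges: every qubit can interact with every other.
edges = complete_graph_edges(5)
print(len(edges))  # -> 10
```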

As a concrete example, in superconducting circuits each variable often corresponds to a physical circuit element. The physics constrains the size of these elements, and each must be placed somewhere on the chip; likewise, the coupler circuits can only be so long. This means that the interaction graph for such devices cannot be fully connected; it must instead be a type of graph we call **quasi-planar**, in which nodes (representing the qubits) can only share edges with (be allowed to interact with) others which are nearby in a 2D plane.

## Interaction graphs of optimization problems

We can now think from the other direction: how much interactivity is needed to solve a given problem? This isn't something people often consider in conventional optimization, but it is important when mapping to devices with limited connectivity. Fortunately, it is straightforward to determine from a QUBO (or QUDO, our discrete extension of a QUBO) representation of a problem: we assign a node to each variable, and then assign an edge between two nodes whenever there is a non-zero off-diagonal element in the QUBO matrix. A simple example is the interaction graph of a maximum independent set problem expressed as a QUBO; the QUBO appears on the left, and the maximum independent set problem on the right, with the elements corresponding to different edges color coded. The solution is also color coded in both, with the nodes not included coloured light blue, along with the corresponding light blue columns in the matrix. In both cases we can see that the remaining nodes/variables are independent: on the graph no grey nodes share an edge, and in the matrix the light blue rectangle covers all off-diagonal elements.
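The rule just described (one node per variable, one edge per non-zero off-diagonal element) is easy to mechanize. A minimal sketch; the toy QUBO matrix below is illustrative and is not the matrix from the figure:

```python
def interaction_graph(qubo):
    """Build the interaction graph of a QUBO matrix (list of lists):
    one node per variable, an edge (i, j) for every non-zero
    off-diagonal element."""
    n = len(qubo)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if qubo[i][j] != 0 or qubo[j][i] != 0:
                edges.add((i, j))
    return set(range(n)), edges

# Toy upper-triangular QUBO on 4 variables (illustrative values).
Q = [
    [-1,  2,  0,  0],
    [ 0, -1,  2,  0],
    [ 0,  0, -1,  2],
    [ 0,  0,  0, -1],
]
nodes, edges = interaction_graph(Q)
print(sorted(edges))  # -> [(0, 1), (1, 2), (2, 3)]
```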

As an example of how problem connectivity can scale, consider a traveling salesperson problem. In the conventional formulation, each variable represents a particular city being visited at a particular point in the tour. Constraints need to be added so that each city is visited exactly once (thus interacting all variables which represent the same city), and so that only one city is visited at a time (thus interacting all variables which represent the same position in the tour). Finally, penalties need to be added for the distances between cities; these add edges between the variables representing one position and those representing the next and previous positions in the tour. While not quite fully connected, it is clear that the interaction graph of a traveling salesperson problem is highly connected.
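The three sources of edges above can be counted explicitly. A minimal sketch, assuming the standard one-hot encoding with variables indexed by (city, position) and a cyclic tour; this is an edge counter for illustration, not a full TSP QUBO builder:

```python
from itertools import combinations

def tsp_interaction_edges(n_cities):
    """Edges of the interaction graph for the standard one-hot TSP
    QUBO, whose variables are (city, position) pairs."""
    cities = range(n_cities)
    positions = range(n_cities)
    edges = set()
    # "each city visited exactly once": couple variables sharing a city
    for c in cities:
        for t1, t2 in combinations(positions, 2):
            edges.add(frozenset({(c, t1), (c, t2)}))
    # "one city per position": couple variables sharing a position
    for t in positions:
        for c1, c2 in combinations(cities, 2):
            edges.add(frozenset({(c1, t), (c2, t)}))
    # distance penalties: consecutive positions, different cities (cyclic tour)
    for t in positions:
        for c1 in cities:
            for c2 in cities:
                if c1 != c2:
                    edges.add(frozenset({(c1, t), (c2, (t + 1) % n_cities)}))
    return edges

n = 5
possible = (n * n) * (n * n - 1) // 2  # edge count of the complete graph on 25 variables
print(len(tsp_interaction_edges(n)), possible)  # -> 200 300
```

Even for 5 cities, two-thirds of all possible variable pairs interact, and the fraction stays high as the problem grows.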

## Mapping to a less connected graph

I have so far claimed there are methods to map a more connected graph to a less connected graph, but haven't discussed what they are. Since our hardware has full connectivity, these techniques are never needed on our systems, so we will not spend much time on them, but it is conceptually useful to have a picture of how this is done. The most used method is called __minor embedding__, where connected groups of qubits ("graph minors" in technical terms) are strongly coupled together so they act as a single variable. A visualization of this process is shown below. In the right graphic, thick lines represent the strong couplings between the qubits, while the thin lines show couplings which could be used to create interactions between the variables.
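To give a flavour of the "strongly coupled together" step (an illustrative sketch, not the embedding algorithm of any particular system), a chain of physical qubits can be tied into one logical variable with a QUBO penalty of strength*(a + b - 2ab) per consecutive pair, which costs 0 when the pair agrees and `strength` when it disagrees. The qubit names here are made up for the example:

```python
def chain_penalty(chain, strength):
    """QUBO penalty forcing all physical qubits in `chain` to agree:
    for each consecutive pair (a, b) add strength*(a + b - 2ab)."""
    linear, quadratic = {}, {}
    for a, b in zip(chain, chain[1:]):
        linear[a] = linear.get(a, 0) + strength
        linear[b] = linear.get(b, 0) + strength
        quadratic[(a, b)] = quadratic.get((a, b), 0) - 2 * strength
    return linear, quadratic

def penalty_energy(assignment, linear, quadratic):
    """Evaluate the penalty for a 0/1 assignment of the physical qubits."""
    e = sum(w * assignment[q] for q, w in linear.items())
    e += sum(w * assignment[a] * assignment[b] for (a, b), w in quadratic.items())
    return e

lin, quad = chain_penalty(["q0", "q1", "q2"], strength=4)
# An aligned chain costs nothing; each broken bond pays the penalty.
print(penalty_energy({"q0": 1, "q1": 1, "q2": 1}, lin, quad))  # -> 0
print(penalty_energy({"q0": 1, "q1": 0, "q2": 1}, lin, quad))  # -> 8
```

Choosing `strength` large compared to the problem couplings is what makes the chain behave as a single variable in practice.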

Technically, minor embedding isn't the only way to map a more connected graph to a less connected one (see __this paper for example__), but it is the most used and the easiest to grasp conceptually. However, as I will show later, there are fundamental mathematical limitations on how efficiently a problem with a highly connected interaction graph can be mapped to any hardware with a quasi-planar interaction graph.

## Graph theory: treewidth and tree decompositions

Fortunately, rather than trying to argue about every possible method which could be used to map to hardware, we can approach the problem at a higher level. __Graph theory__, which (unsurprisingly) is the theory that allows us to understand graphs at a more abstract level (recall that we are using the formal computer science definition of a "graph", nodes connected by edges, not the plot of a function which is often called a "graph" in high-school mathematics), gives us tools to understand the limits of how well any mapping technique could perform. In particular, graph theory lets us measure properties related to graph connectivity which allow us to reason about what is or isn't possible. The important property here is called **treewidth**, which relates to a mathematical technique called **tree decomposition**.

A tree decomposition is a way of mapping any graph (which may in general have loops) to a kind of graph known as a "tree", which has no loops. The decomposition groups nodes of the original graph into the nodes of a new graph, which must follow these rules:

- Every node of the original graph must appear in at least one node of the new graph, and any two nodes joined by an edge in the original graph must appear together in at least one node of the new graph
- The nodes of the new graph which contain any given node of the original graph must be connected to each other within the new graph (the connections can be indirect; they do not need to form a fully connected set)
- The new graph must be a tree: it cannot have any loops

The goal is to minimize the number of original-graph nodes which appear in the "largest" node of the new graph.

From this decomposition we can define a property known as the treewidth: the number of original-graph nodes in the largest node of the best possible tree decomposition, minus one. The convention of subtracting one guarantees that a graph without loops (a tree) has a treewidth of one.
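The width computation and the edge-covering rule can be checked mechanically. A minimal sketch on a 4-cycle, whose two-bag decomposition has width 2:

```python
def decomposition_width(bags):
    """Width of a tree decomposition: size of the largest bag
    (node of the new graph) minus one, so that a tree has width 1."""
    return max(len(bag) for bag in bags) - 1

def covers_all_edges(bags, edges):
    """First rule: every edge of the original graph must fit
    entirely inside at least one bag."""
    return all(any(set(edge) <= bag for bag in bags) for edge in edges)

# Tree decomposition of the 4-cycle a-b-c-d-a into two bags joined
# by a single edge (a path, hence a tree).
bags = [{"a", "b", "c"}, {"a", "c", "d"}]
cycle_edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(decomposition_width(bags))            # -> 2
print(covers_all_edges(bags, cycle_edges))  # -> True
```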

The key property of treewidth is that it is not possible to map a graph with a given treewidth into a graph with a smaller treewidth; this is hopefully somewhat intuitive. Tree decompositions are also important in algorithms: in fact, __certain dynamic programming algorithms__ can efficiently solve NP-hard problems on graphs with low treewidth, even if the problem involves a large number of variables. This implies that if a mapping from high-treewidth to low-treewidth graphs were possible, it would show P=NP. In general, finding the optimal tree decomposition is itself a computationally hard problem, but in some cases it is possible to make simple arguments about the treewidth.
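To make the "efficient on low treewidth" claim concrete, here is the classic dynamic program for maximum independent set, an NP-hard problem in general, on a tree (treewidth 1). Each node contributes an (included, excluded) pair and the recursion runs in linear time; this is a textbook sketch, not tied to any particular hardware:

```python
def max_independent_set_size(adj, root=0):
    """adj: adjacency dict of a tree. Returns the size of the largest
    independent set, via the classic include/exclude recursion."""
    def solve(node, parent):
        incl, excl = 1, 0
        for child in adj[node]:
            if child != parent:
                c_incl, c_excl = solve(child, node)
                incl += c_excl                 # node taken: children must be out
                excl += max(c_incl, c_excl)    # node skipped: children are free
        return incl, excl
    return max(solve(root, None))

# Path 0-1-2-3-4: the optimum picks the three alternating nodes.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(max_independent_set_size(path))  # -> 3
```

The same include/exclude idea generalizes to tree decompositions of higher (but still small) treewidth, with the table size growing exponentially in the width rather than in the number of variables.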

## Treewidth of different graphs

From the definition of treewidth, the optimal tree decomposition of a fully connected graph is to place every node of the original graph into a single node of the new graph; there is no other way to do it. This makes the treewidth of a fully connected graph of size n equal to n-1. I won't prove it here, but it should also be intuitively clear that a highly connected graph will have a treewidth close to its number of nodes.

For quasi-planar graphs, the situation is different: a node can only share an edge with nodes within a certain distance of it. We can therefore imagine a tree-decomposition strategy where we divide the graph into overlapping rows, each with a height of twice the interaction range, spaced by the interaction range. Each of these rows overlaps only with the row above it and the row below it, so the graph resulting from this decomposition has no loops. For an n-by-n quasi-planar graph (where the interaction range is much smaller than the graph size), the number of nodes in each row is proportional to n, but the total number of nodes is proportional to n^{2}. Therefore the size of the highly connected problems which can be mapped scales as the square root of the size of the hardware interaction graph if it is quasi-planar. Below we show a visualization of this decomposition process on a quasi-planar graph.

In this example, the graph has n=25 nodes, but we have shown the treewidth cannot be more than 9. This implies that it is impossible to map a fully connected problem with more than 10 nodes to it by any method. Note that a more optimal (but harder to draw) decomposition is possible, but this simple one demonstrates the necessary scaling. The treewidth of this graph is actually only 6; can you figure out the corresponding decomposition, with the largest grouping containing only 7 nodes?
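The row-band construction described above can be sketched for a grid-like quasi-planar graph. The `reach` parameter stands in for the interaction range; with a 5-by-5 grid and reach 1 it reproduces the 25-node, width-9 bound used in this section:

```python
def row_decomposition_bags(n, reach=1):
    """Bags of the row-band tree decomposition for an n-by-n
    quasi-planar graph with interaction range `reach`: overlapping
    horizontal bands of height 2*reach, stepped by `reach`, so each
    band overlaps only its neighbours and the bag graph is a path."""
    bags = []
    top = 0
    while top < n:
        rows = range(top, min(top + 2 * reach, n))
        bags.append({(r, c) for r in rows for c in range(n)})
        top += reach
    return bags

bags = row_decomposition_bags(5, reach=1)
width = max(len(bag) for bag in bags) - 1
# 25 nodes in total, but no bag holds more than 2*reach*n = 10 of them,
# bounding the treewidth by 9.
print(len(bags), width)  # -> 5 9
```

Since the largest bag scales as n while the grid holds n^2 nodes, this makes the square-root scaling claim above concrete.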