# Integer Optimization

## Integer problems tend to be harder

One key aspect of optimization is that problems often become much harder when restricted to the domain of integers rather than allowing continuous variables. For example, take the task of finding values of three variables $X>0$, $Y>0$, and $Z>0$ such that

$X^3+Y^3=Z^3$.

If we allow these three variables to take any value, then this problem is somewhat trivial. We can choose arbitrary values of $X$ and $Y$, cube them, sum the cubes, and take the cube root to obtain $Z$. But what happens if we require that all three variables be integers? In this case, the previous method does not work. We can of course select integer $X$ and $Y$, but there is no guarantee that $Z$ will be an integer for any given choice. This trivial problem has been converted to a much harder one. In fact, it can be proven that there is no solution; this is the $n=3$ case of Fermat's Last Theorem. Another example comes from factoring numbers. Given a number $Z$, finding

$X \times Y = Z$

is again trivial without restrictions. One can simply choose a value for $X$ and then divide to find $Y=Z/X$. However, this problem again becomes hard if we require that $X$ and $Y$ be integers (greater than one, to exclude trivial factorizations). In fact, the hardness of this kind of problem is the basis of much of __modern cryptography__. While neither of the problems discussed so far is an optimization problem (although they could be converted to optimization problems), they do make an important point: integer problems tend to be hard, and therefore are a natural paradigm for QCi’s solvers.
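Both observations can be checked with a short brute-force sketch in plain Python (the search range of 40 is an arbitrary choice for illustration):

```python
# Continuous version: trivially solvable for any choice of X and Y.
X, Y = 2.0, 3.0
Z = (X**3 + Y**3) ** (1 / 3)  # cube root of the sum of cubes
assert abs(X**3 + Y**3 - Z**3) < 1e-9

# Integer version: a brute-force search over a small range finds nothing,
# and Fermat's Last Theorem guarantees that no larger range ever will.
solutions = [(x, y, z)
             for x in range(1, 40)
             for y in range(1, 40)
             for z in range(1, 40)
             if x**3 + y**3 == z**3]
print(solutions)  # -> []
```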

## Integer Optimization

What about optimization problems? Let’s consider a simple case, linear optimization. In linear optimization, the goal is to minimize or maximize the value of a linear function

$C_1X_1+C_2X_2+\cdots$

where the coefficients can be positive or negative and the variables are restricted to non-negative values ($X_1\ge 0$, $X_2\ge 0$, etc.). Without additional constraints, these problems are either trivial or unbounded. If one of the coefficients is negative, then the value can always be made smaller by increasing the corresponding variable. Likewise, if all are positive, then the trivial answer is to set all variables to zero. To make the problem non-trivial, we need to add constraints. A simple but mathematically powerful form of constraint is the linear constraint, which takes a very similar form to the objective function

$A_{i,1}X_1+A_{i,2}X_2+\cdots\le b_i$.

The index $i$ is included so there can be multiple such constraints. Once these constraints are included, we can define problems that are both bounded and non-trivial. First, let us consider the case where the variables $X$ are allowed to take a continuum of values. Intuitively, one could imagine solving these problems by defining __“barrier” functions__ that penalize proximity to violating a constraint, and then “moving downhill” within this space while progressively weakening the barriers so they only penalize points very close to the boundary. While there are many details, it can be proven that these methods, known as “__interior point methods__”, can solve this kind of problem efficiently, in the sense of polynomial time as described in __this lesson__.
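As a concrete illustration, here is a minimal sketch that solves a tiny two-variable instance of such a problem. Rather than implementing an interior point method, it relies on the fact that a bounded linear program attains its optimum at a vertex of the feasible region, and simply enumerates the vertices; the objective and constraints are made up for illustration:

```python
from itertools import combinations

# minimize  -X1 - 2*X2
# subject to X1 + X2 <= 4,  X1 + 3*X2 <= 6,  X1 >= 0, X2 >= 0
c = [-1.0, -2.0]
# Each row (a1, a2, b) encodes a1*X1 + a2*X2 <= b; the last two rows
# encode the non-negativity constraints as -X1 <= 0 and -X2 <= 0.
rows = [(1.0, 1.0, 4.0), (1.0, 3.0, 6.0), (-1.0, 0.0, 0.0), (0.0, -1.0, 0.0)]

def intersect(r1, r2):
    """Intersection point of two constraint boundary lines (None if parallel)."""
    (a1, a2, b1), (c1, c2, b2) = r1, r2
    det = a1 * c2 - a2 * c1
    if abs(det) < 1e-12:
        return None
    return ((b1 * c2 - b2 * a2) / det, (a1 * b2 - c1 * b1) / det)

# A vertex is an intersection of two boundaries that satisfies every constraint.
vertices = [p for r1, r2 in combinations(rows, 2)
            if (p := intersect(r1, r2)) is not None
            and all(a * p[0] + b * p[1] <= rhs + 1e-9 for a, b, rhs in rows)]

best = min(vertices, key=lambda p: c[0] * p[0] + c[1] * p[1])
print(best)  # -> (3.0, 1.0), with objective value -5
```

This vertex enumeration is only practical for toy instances; interior point and simplex methods are what make realistic problem sizes tractable.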

Based on the previous discussion, you have probably guessed that these problems become hard when restricted to integer values. In fact, they become NP-hard (a review of what this means can be found __here__). This is the famous model known as integer linear programming, and it is frequently the form __used to express optimization problems in operations research and related topics__.
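To make the integer version concrete, the sketch below solves a made-up two-variable integer linear program by exhaustively enumerating integer points in a bounding box. This only works because the instance is tiny; the number of candidate points grows exponentially with the number of variables, which is exactly why no efficient general method is known:

```python
from itertools import product

# Toy integer linear program (made-up numbers):
#   minimize  -X1 - 2*X2
#   subject to X1 + X2 <= 4,  X1 + 3*X2 <= 6,  X1, X2 >= 0 and integer
c = [-1, -2]
constraints = [([1, 1], 4), ([1, 3], 6)]

# Exhaustive search over a small bounding box of integer points.
feasible = (x for x in product(range(5), repeat=2)
            if all(sum(a * v for a, v in zip(row, x)) <= b
                   for row, b in constraints))
best = min(feasible, key=lambda x: sum(ci * xi for ci, xi in zip(c, x)))
print(best)  # -> (3, 1), with objective value -5
```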

## Why is integer hard?

It may at first seem counterintuitive that a problem that is relatively easy to solve can become hard when restricted to integer values. We would usually expect the best integer solution to be “near” the best continuous solution, so intuitively it seems we could solve the discrete problem by first solving the “easy” continuous one and then finding a nearby integer point. This step may seem like it should be trivial, but it is actually rather difficult because the number of points to check grows exponentially. Solving a problem with $n$ variables can be thought of as searching an $n$-dimensional space, with the integer solutions being points lying on a regular (hyper-)grid.

Visualizing more than three (or maybe four) dimensions is hard for humans in general, but the point can be seen by examining the pattern going from one to four dimensions. In one dimension, it is fairly clear that there are two options for rounding a value. This can be visualized on a line where blue edges represent viable rounding options:

Going to two dimensions, a non-integer point is adjacent to four integer values. Since each variable can be rounded two ways, this can be visualized as a point in the middle of a square:

In three dimensions, we can now imagine a point located somewhere within a cube, giving eight possible directions (note that points outside the cube are no longer drawn, to make visualization easier):

Finally, if we stretch our imagination a bit, we can just about visualize the four-dimensional version, even though we live in a three-dimensional world. In this case, the point would be in the middle of a four-dimensional hypercube (sometimes called a tesseract) with 16 possible directions to round:

The pattern here should be fairly clear: the number of possibilities doubles each time a variable is added, so rounding $n$ variables gives $2^n$ candidate points. There are of course more clever strategies than just checking all nearby integer points (in fact, a family of algorithms which use clever rounding are known as LP relaxations and provide __a powerful class of approximation algorithms__). However, none that we know of so far are efficient at finding an exactly optimal solution. Furthermore, if an efficient one were found, this would show that P=NP, which would be a very __surprising result for theoretical computer science__.
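The doubling pattern is easy to demonstrate directly. This sketch in plain Python enumerates every way of rounding each coordinate of a fractional point down or up:

```python
from itertools import product
from math import ceil, floor

def roundings(point):
    """All integer corner points obtained by rounding each coordinate down or up."""
    return list(product(*[(floor(v), ceil(v)) for v in point]))

for n in range(1, 5):
    fractional = [0.5] * n  # an arbitrary non-integer point in n dimensions
    print(n, len(roundings(fractional)))  # prints 2, 4, 8 and 16 options
```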

## Integer linear programming and Dirac

An astute reader might be wondering why we are discussing linearly constrained programming models when our Dirac hardware is designed to solve quadratic unconstrained problems. The answer is that linear constraints have a very natural quadratic expression: namely, the linear constraint

$A_{i,1}X_1+A_{i,2}X_2+\cdots\le b_i$

can be enforced by adding a penalty of the form

$\lambda\left(A_{i,1}X_1+A_{i,2}X_2+\cdots+Y_i-b_i\right)^2$

where $Y_i\ge 0$ is a “slack” variable and $\lambda$ is a large positive constant. As long as the constraint is satisfied, a suitable value of $Y_i$ can be found to make the expression zero; if it cannot be satisfied, a large penalty is incurred. Penalty implementations are discussed in __this tutorial__ in a binary context, but the same formulation works in a higher-than-binary setting. Dirac provides a more powerful model than integer linear programming, since nothing prevents the objective function from also containing quadratic terms. It can also implement __even higher order interactions__.
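A quick numerical sketch of this penalty construction (with made-up coefficients and penalty weight) shows both behaviors: a satisfied constraint admits a slack value that zeroes the penalty, while a violated one incurs a penalty that no non-negative slack can remove:

```python
# Made-up example: constraint 1*X1 + 2*X2 <= 5, penalty weight lam.
lam = 10.0

def penalty(A, X, b, Y):
    """lam * (A.X + Y - b)**2 for a candidate point X and integer slack Y >= 0."""
    return lam * (sum(a * x for a, x in zip(A, X)) + Y - b) ** 2

A, b = [1, 2], 5
# Satisfied (1*1 + 2*1 = 3 <= 5): the slack Y = 2 zeroes the penalty.
print(penalty(A, [1, 1], b, Y=2))  # -> 0.0
# Violated (1*3 + 2*2 = 7 > 5): no non-negative slack avoids a penalty.
print(min(penalty(A, [3, 2], b, Y=y) for y in range(10)))  # -> 40.0
```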

## Binary problems on a higher-than-binary solver

One immediate consequence of the constraint implementation discussed in the previous section is that it is straightforward to place an upper bound on the values a variable can take by adding a term

$(X_i+Y_i-m)^2$.

We can thus constrain a variable to take a maximum value of $m$ at the cost of a single additional variable $Y_i\ge 0$. However, many practical problems of interest are binary in nature, and in that case we can do even better by avoiding the additional variable altogether. We note that if $X_i\ge 0$, then the expression

$X_i(X_i-1)$

will be zero if and only if $X_i=0$ or $X_i=1$, and will take a positive value at every other integer point. Restricted to integers, this quadratic term therefore acts as a binary constraint. We can thus constrain a variable to take binary values without adding a slack variable. In other words, our integer solver can naturally represent binary problems without any overhead: a binary problem with $n$ variables (__a QUBO__, for example) can be mapped to an integer problem with $n$ variables. Effectively, an integer solver can do everything a binary solver can do, plus much more!
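The behavior of this binary penalty is easy to tabulate over small non-negative integers:

```python
# For integer X >= 0, the term X*(X - 1) vanishes exactly at X = 0 and X = 1
# and is positive at every larger integer, so adding lam * X*(X - 1) to the
# objective forces an integer variable to behave as a binary one.
values = {x: x * (x - 1) for x in range(6)}
print(values)  # -> {0: 0, 1: 0, 2: 2, 3: 6, 4: 12, 5: 20}
```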

## Conclusion

In this lesson, we discussed solving integer problems of the type our Dirac devices are able to solve. A key feature of these problems is that they tend to be harder to solve over an integer domain than over continuous variables. We gave several examples where the continuum version can be solved efficiently, and sometimes even trivially, but the discrete version is much more difficult (even impossible in extreme cases). We also showed that for general integer problems it is straightforward to restrict variables to binary values using simple penalty terms and without extra variables. All of these facts suggest that integer solving is an important and powerful setting for our hardware to work in.