# The Many Faces of Quantum Optimization

Quantum computers exploit counterintuitive properties of the quantum mechanical world to run computational tasks, such as optimization, faster and more efficiently than classical computers. The two flagship examples of this *computational supremacy* are prime factoring and unstructured search (search in large, unstructured databases).

However, for the next few years (the so-called NISQ era), the number and quality of qubits will not be adequate for factoring and unstructured search. Attention has therefore shifted toward another direction: optimization, in all its manifestations: combinatorial optimization, continuous and discrete optimization, quantum chemistry, training machine learning models, and so on. All of these tasks are believed to be less demanding and to require fewer qubits than prime factoring and unstructured search.

In this blog post, we cover two different techniques for solving optimization problems on today’s NISQ quantum computers: quantum variational algorithms and quantum annealing. The goal is to convey the surprisingly simple mechanisms behind each one.

## Go With the Flow

The easiest, and arguably the most natural, way to solve classical optimization problems is quantum annealing. The figure below summarizes the solution process inside a quantum annealer.

The *quantum evolution* starts from an initial quantum system whose ground state can be prepared efficiently (the ground state is trivial). The system is then *slowly* perturbed into the more complex problem we aim to solve; going too fast will excite the system and lose the “fragile” ground state (imagine walking on a tightrope holding a long pole: the balance is the ground state).
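Schematically, this slow interpolation is described by a time-dependent Hamiltonian (this is the standard form from the quantum annealing literature; the subscript names here are our own notation):

```latex
H(s) = (1 - s)\, H_{\mathrm{init}} + s\, H_{\mathrm{problem}}, \qquad s = t/T \in [0, 1]
```

where *H*_init has a trivial, easily prepared ground state, *H*_problem encodes the cost function we want to minimize, and *T* is the total annealing time. The adiabatic theorem guarantees that if *T* is large enough, the system remains in the instantaneous ground state throughout, ending in the ground state of *H*_problem — the solution.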

At the end of the evolution, we measure and get the answer!

User intervention is not needed here, as the computation is driven entirely by the flow of the quantum evolution described above. This makes quantum annealing the most obvious way to solve optimization problems.

## The Power of Data

Solving an optimization problem using today’s gate-model quantum computers is completely different, and arguably much more “artificial” and “ad hoc” than using quantum annealing.

The basic idea is again quite simple: train a *quantum circuit* to map the initial, trivial configuration of the qubits to the configuration that represents the solution of our optimization problem. Since the latter is not known in advance, this is an unsupervised machine learning problem.

To make this more concrete, let us consider the simple problem of finding the value of the binary variable *x* that minimizes the following simple cost function:

*f(x) = -x*

Because this is a very simple example, we can easily solve it by inspection: since *x* can take only the two values 0 and 1, the *optimal* solution is simply *x\** = 1, for which *f(x\*)* = -1.
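As a sanity check, this inspection is a one-liner classically (a minimal Python sketch; the function name `f` is just our label for the cost):

```python
# Brute force: evaluate the cost on every binary value of x and keep the best.
def f(x):
    return -x

best_x = min([0, 1], key=f)
print(best_x, f(best_x))  # prints: 1 -1
```

Of course, exhaustive search stops being an option once the number of binary variables grows, which is exactly why we want a trainable circuit.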

We now want to solve the problem *programmatically* by training a simple circuit. We will start from some choice of the initial value of *x*, say *x* = 0. Our circuit has one simple gate, a rotation, followed by a measurement:

It helps to think of the two possible values of *x* as two orthogonal directions (states) in a two-dimensional plane:

Applying the rotation puts the variable *x* into a *superposition state* (the green arrow), which has non-zero probabilities of being 0 and 1.

Now, because this is a very simple example, we expect the final desired angle to be Θ = π/2, since this angle rotates our initial state *x* = 0 to the optimal solution *x* = 1. To find it programmatically, we use the familiar *gradient descent* algorithm, which starts from an initial value of the angle and gradually improves it. The loss function is *intentionally* set to the cost function *f(x) = -x*. This drives the training towards the optimal solution.
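The training loop can be sketched entirely classically for this one-qubit case (a minimal simulation, not real hardware; the rotation convention, starting angle, and learning rate are our own choices). With the planar picture above, rotating the state *x* = 0 by an angle Θ gives probability sin²Θ of measuring *x* = 1, so the expected cost is -sin²Θ, and we can descend its analytic gradient:

```python
import math

def expected_cost(theta):
    # Rotating |x=0> by theta gives P(x=1) = sin^2(theta),
    # so the expected value of f(x) = -x is -sin^2(theta).
    return -math.sin(theta) ** 2

def gradient(theta):
    # d/dtheta of -sin^2(theta) = -sin(2*theta)
    return -math.sin(2 * theta)

theta = 0.1   # small nonzero start (the gradient vanishes at exactly 0)
lr = 0.2      # learning rate
for _ in range(200):
    theta -= lr * gradient(theta)

print(round(theta, 3))                  # ~1.571, i.e. pi/2
print(round(expected_cost(theta), 3))   # ~-1.0, the optimal cost
```

On real hardware the expected cost would instead be estimated by repeatedly running and measuring the circuit, and the gradient obtained from those samples, but the descent loop looks the same.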

After a few iterations, the training converges to Θ = π/2, which yields the optimal value *x\** = 1.

What we described above is similar to what happens inside the Variational Quantum Eigensolver (VQE) and the Quantum Approximate Optimization Algorithm (QAOA), with different choices of circuit: in all cases, we train unsupervised machine learning models.

There are many open questions here; for instance, the choice of circuit is under active investigation. The same is true for quantifying the quantum advantage of such hybrid algorithms. We will leave all of this for another time.

## Conclusion

Quantum computations come in different flavors. For the next few years, however, only a subset of the potential computations can be performed with meaningful practical outcomes.

This subset likely includes solving optimization problems, which are ubiquitous in the real world.