Complexity theory, P and NP

Introduction

If you want to make an easy million dollars, all you have to do is prove that P either does or does not equal NP. Sounds easy, right? It turns out that this is one of the biggest unsolved problems in computer science. Further, it has deep implications for how computers are built and used, and for how they implement cybersecurity measures. The P versus NP problem is one of the main drivers of quantum computing, and a basic understanding of it will help you see why many unconventional computing approaches are so valuable in today’s world.

First informally

Informally, the conjecture P ≠ NP is the statement that hard optimization problems exist. In other words, there are optimization problems that cannot be solved (to optimality) quickly, no matter how clever an algorithm is. While this statement may be intuitively clear, it is not mathematically rigorous. For example, how do we define “quickly”? What kinds of problems are allowed in this consideration? How can such a broad statement even make sense? There are many subtleties here, and they are often important for understanding what we know theoretically about hard optimization problems, so it is worth discussing the full, in-depth version of this theory. It is important to understand what complexity theory tells us, but it is just as important to understand what it does not tell us. To appreciate both aspects we need to explain the theory in detail.

Even informally, it is easy to see why this is a conjecture rather than a proven statement: any proof that such problems exist (i.e. that P ≠ NP) would have to account for every possible method of solving them, and it is difficult to see how one could even reason about all such methods in order to prove that no efficient one exists. On the other hand, if P = NP, it is easier to imagine what a proof would look like: it would describe an algorithm that performs well on hard problems. In fact, a key result is that the algorithm would only be required to do well on one of a set of hard problems. Problems in NP can be efficiently mapped into the hardest of them, so a demonstration that an efficient algorithm exists for one problem in this hardest class is also a demonstration that none of them are hard (in a particular formal sense we discuss later). The fact that there would be a relatively straightforward route to proving P = NP if it were true, while no such route is apparent for P ≠ NP, together with the enormous amount of work that has gone into the study of optimization algorithms, has led to an informal consensus among experts that P ≠ NP. In other words, truly hard optimization problems exist.

What the letters P and NP mean

In this case, P stands for “polynomial time” and NP for “non-deterministic polynomial time”. The term polynomial relates to the notion of what is considered efficient. Note that a frequent error is to state that NP stands for “non-polynomial”; intuitively this seems to make some sense: if “polynomial” is shorthand for “fast”, then surely a problem is either fast or it is not? As we will see, many of the details of P and NP (and theoretical computer science in general) are not very intuitive, but these concepts are important and often misused, so it is worth having a long tutorial explaining the underlying concepts.

P

A problem being “in” P means that the time to solve it (using some algorithm that can be run on a classical, i.e. non-quantum, computer) with n variables grows only polynomially with n, so something like n^k rather than, say, k^n (where k is some number). In other words, for problems in P, if we pick a fixed finite (but maybe large) value of k and are given a time C·n^k (where C is some constant, which could also be large), we will always be able to solve the problem in the allotted time, no matter how big n is. This is not true for problems whose runtime grows exponentially. For example, if the time to solve a problem grows as 2^n, then it is always possible to find a big enough n such that 2^n > C·n^k for any finite C and k (this can be seen from the Taylor expansion of 2^n: for any k there are higher-order terms in the expansion, so the exponential eventually wins). Growth can also be slower than exponential but still faster than polynomial: for example, we can always find a large enough n such that 2^(√n) > C·n^k, even though 2^(√n) grows more slowly than an exponential. Such an algorithm therefore cannot solve the problem in polynomial time, whereas an algorithm that takes a time scaling as, say, 2n^3 + 3n^2 can.
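
As a rough numerical illustration (the constant C and exponent k below are arbitrary choices, picked only to make the point), the following sketch compares a generously weighted polynomial time budget against a 2^n runtime and shows the exponential overtaking it:

```python
# Arbitrary illustrative constants: a generous polynomial budget C * n^k
# versus an exponential runtime 2^n.
C, k = 1_000_000, 3

for n in [10, 20, 40, 60, 80]:
    poly = C * n**k
    expo = 2**n
    print(f"n={n:3d}   C*n^{k} = {poly:.2e}   2^n = {expo:.2e}")

# For small n the polynomial budget dwarfs 2^n, but by n = 40 the exponential
# has already overtaken it, and by n = 80 it is ahead by roughly twelve orders
# of magnitude -- no choice of C or k changes this eventual outcome.
```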


NP

NP is a similar concept, but it gives us an additional resource: a problem is “in” NP if it can be solved in polynomial time when you get maximally lucky guessing solutions. Here “non-deterministic” refers to guessing, but considers the case where you make the best possible guesses. The class NP also comes with a caveat: we have to be able to efficiently (in time growing polynomially with the problem size) verify the quality of the answer obtained by guessing, in a sense we will explain later. Note that because P versus NP deals with whether or not problems can be solved in polynomial time, NP is often incorrectly quoted as standing for “non-polynomial”. It is intuitive that a non-deterministic (in the maximally lucky sense) algorithm can solve a combinatorial optimization problem in polynomial time. For example, in a traveling salesperson problem you just guess the order of the cities; there is a non-zero chance that this guess is right, and it only takes recording the n - 1 guesses for the order to solve a problem with n cities. The length of this route can also be verified quickly, with a simple and fast calculation. What this is really saying is that if P = NP, then any optimization problem that could be solved “quickly” by getting maximally lucky could also be solved quickly without this benefit. The suspected case of P ≠ NP, on the other hand, indicates that at least some optimization problems will not be fast to solve by any method.
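
To make the “lucky guess” picture concrete, here is a small sketch on a made-up traveling salesperson instance (the distance matrix and city count are arbitrary). The point is only that checking any single guessed tour takes one pass over the n cities, so a guess that happened to be the best one could have its length confirmed very cheaply:

```python
import random

# A tiny, made-up traveling salesperson instance: 4 cities with arbitrary
# symmetric distances.
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
n = len(dist)

def tour_length(order):
    # Checking the length of a proposed tour is a single pass over the n cities.
    return sum(dist[order[i]][order[(i + 1) % n]] for i in range(n))

# A non-deterministic machine "guesses" the order of the cities.  Classically we
# can only simulate this with a random permutation, but if the guess happened to
# be the best one, computing its length would still take only this one cheap pass.
guess = list(range(n))
random.shuffle(guess)
print(guess, tour_length(guess))
```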

Perhaps counterintuitively, there are problems where maximum luck will not yield a verifiable solution. For now, verifiable just means there is a way to check the quality of the solution; this statement is made precise in the next section. All optimization problems with efficiently calculable objective functions are verifiable, but other kinds of problems are not. One example is counting problems. The #SAT problem (“sharp-SAT” in spoken English) asks how many binary strings satisfy a given set of conditions. Getting maximally lucky means you can find a new string satisfying the conditions very quickly, but the strings still need to be counted one by one. If the number of strings satisfying the conditions grows exponentially with the length of the strings, so will the time it takes to enumerate them, even with maximally lucky guesses. Hypothetically, if you were maximally lucky you could just guess the number of strings, but the fact that this is indeed the number of satisfying bitstrings could not be verified. In fact, even if all the satisfying strings were somehow given, the fact that they do satisfy the expression could not be verified efficiently, since there are exponentially many to check.
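
A minimal sketch of the counting issue, using a toy three-variable formula of our own invention in place of a real #SAT instance: each individual check is fast, but producing the count still means walking through all 2^n assignments.

```python
from itertools import product

# A toy stand-in for a #SAT instance over three variables x0, x1, x2.
def satisfies(assignment):
    x0, x1, x2 = assignment
    return (x0 or x1) and (not x1 or x2)

# Counting requires visiting all 2^n assignments, even though each individual
# check is cheap -- which is why counting problems like #SAT sit outside NP.
count = sum(1 for assignment in product([False, True], repeat=3)
            if satisfies(assignment))
print(count)  # prints 4, the number of satisfying assignments
```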

Optimization versus decision problems

The problems that complexity theory usually deals with are decision problems, which have yes or no answers. This technicality comes from the verifiability of the answer. Optimization problems can be translated into decision problems. For example, a decision version of a traveling salesperson problem would be to decide whether a route shorter than some distance X is possible; finding such a route would definitively answer this question with “yes” (the “no” answer does not need to be verifiable). For such problems, the verification step consists of simply calculating the objective function and performing a comparison. However, if we instead converted a #SAT problem into a decision problem (i.e. are there more than <exponentially large number> of satisfying assignments?), no such efficient verification would be possible; #SAT problems are therefore said to not be “in” NP.
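
The sketch below spells out this decision version for the same toy traveling salesperson instance used earlier (again, the matrix is a made-up example, with integer distances): a claimed route acts as a certificate for a “yes” answer, and checking it is just one objective evaluation plus a comparison.

```python
# Decision version of the toy TSP: "is there a route of length less than X?"
dist = [
    [0, 2, 9, 10],
    [2, 0, 6, 4],
    [9, 6, 0, 3],
    [10, 4, 3, 0],
]
n = len(dist)

def route_length(route):
    return sum(dist[route[i]][route[(i + 1) % n]] for i in range(n))

def verify_yes_certificate(route, bound):
    # Verifying a "yes" answer: evaluate the objective, compare to the bound.
    return route_length(route) < bound

print(verify_yes_certificate([0, 1, 3, 2], 20))  # True:  2 + 4 + 3 + 9 = 18 < 20
print(verify_yes_certificate([0, 2, 1, 3], 20))  # False: 9 + 6 + 4 + 10 = 29
```

Note also that answering the decision question for a range of bounds X (for integer distances, a binary search over X) pins down the optimal route length, which is one reason the decision and optimization formulations are so closely tied.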

A particularly interesting set of NP problems is known as the NP-complete problems. These are decision problems that can be efficiently solved by maximally lucky guessing and efficiently verified, but with an extra property: any other problem that can be efficiently solved by maximally lucky guessing can be mapped into them in a time that scales only polynomially with the size of the problem.

These mappings are known as reductions, and by “efficient” we mean that the reduction can be done in a time that grows only polynomially with the size of the problem. It is important to note that this is a very formal notion of “efficient”: in practice, the polynomial factors coming from these mappings can be very detrimental to the performance of an algorithm. This is in fact a general weakness of complexity theory; it tells us about efficiency only in a very abstract sense. We do not have space here to explain reductions in detail, but if you want to understand how they work, an intuitive (and fun) way to see this is to read this paper, which performs reductions to show that many classic video games are formally NP-hard. It is worth emphasizing that there are problems that appear to be hard but are not NP-complete. If P ≠ NP, NP-completeness implies that a problem will be hard for any algorithm, but the converse is not true: hardness (even hardness for all algorithms) does not imply NP-completeness.
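
The video-game reductions in that paper are too involved to reproduce here, but a classic textbook reduction (chosen by us as an illustration, not taken from the paper) gives the flavour: a graph has an independent set of size at least k exactly when it has a vertex cover of size at most |V| - k, so mapping one decision problem to the other is just “keep the graph, adjust the bound”, which clearly takes polynomial time.

```python
# Reduction from Independent Set to Vertex Cover: the complement of an
# independent set is a vertex cover, so the instance (G, k) maps to (G, |V| - k).
def independent_set_to_vertex_cover(num_vertices, edges, k):
    return num_vertices, edges, num_vertices - k

# Example: a triangle has a maximum independent set of size 1 and a minimum
# vertex cover of size 2.
print(independent_set_to_vertex_cover(3, [(0, 1), (1, 2), (0, 2)], 1))
# -> (3, [(0, 1), (1, 2), (0, 2)], 2)
```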

[Figure: Venn diagram of the complexity classes]

A subtlety here is that we have not yet shown that all NP problems (including the decision versions of all optimization problems, hard or not) can be reduced to NP-complete problems. This fact can be proven by showing that programs on a universal computer can be reduced to a certain class of satisfiability problems (which are now known to be NP-complete). Because of this, a program to check an answer to any such decision problem could be compiled to a universal computer (historically a Turing machine was used in the proofs), and the question of whether an output of “true” is attainable can then be written as a satisfiability problem (or, thanks to reductions, as any other NP-complete problem). In other words, if we build an expression that represents the computer and set its output to “true”, we are asking whether there is a set of inputs that does not violate any of the conditions defining the program; such inputs, if they could be found, would provide a verifiable solution to the decision problem.
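
The sketch below is not the real Cook–Levin construction, just a toy slice of the trick it relies on (the variable names and encoding are our own illustrative choices): each logic gate in the verifying circuit can be written as a few clauses that are satisfied exactly when the gate’s inputs and output are consistent. Chaining such clauses over every gate and fixing the final output to “true” turns “is there an input the verifier accepts?” into a satisfiability question.

```python
from itertools import product

# Clauses encoding z <-> (x AND y); a literal is (variable, polarity),
# where polarity False means the variable appears negated.
clauses = [
    [("z", False), ("x", True)],                # z -> x
    [("z", False), ("y", True)],                # z -> y
    [("x", False), ("y", False), ("z", True)],  # (x AND y) -> z
]

def clauses_hold(assignment):
    # A clause is satisfied if any of its literals matches the assignment.
    return all(any(assignment[v] == pol for v, pol in clause) for clause in clauses)

# Sanity check over the full truth table: the clauses hold exactly when
# z really equals (x AND y).
for x, y, z in product([False, True], repeat=3):
    assert clauses_hold({"x": x, "y": y, "z": z}) == (z == (x and y))
print("gate encoding is consistent")
```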

NP-Hard

Essentially all of the results for the decision versions of optimization problems carry over to the optimization versions themselves, although the language becomes somewhat tricky. NP-hard problems are problems that are at least as hard as NP-complete problems (in the sense that NP-complete problems can be efficiently reduced to them) but without the restriction of being decision problems. There is also no requirement for solutions to NP-hard problems to be verifiable in any way. Since NP-hard problems are defined as being at least as hard as another set of problems, with nothing limiting how hard they can be, they formally include some problems that are provably undecidable, such as the “halting problem”. The set of hard optimization problems we are interested in are, formally, NP-hard optimization problems: optimization problems that are at least as hard to solve as NP-complete decision problems (recall that problems can be hard without being NP-hard if P ≠ NP). Problems like the halting problem are excluded from this set simply by not being optimization problems. In practice, shorthand terms that are technically incorrect but generally clear from context are often used when discussing hard optimization problems: for example, referring to NP-hard optimization problems as NP-hard problems, leaving the restriction to optimization implied, or referring to these optimization problems as NP-complete, when formally that term applies only to decision problems. This is one example of a mismatch between theory and practice; the set of problems that are interesting in practice is cumbersome to define within a rigorous theoretical framework. The tension between what rigorous logic can be applied to and what is interesting in practice leads to several caveats and limitations, which we discuss in the next section.

Caveats and Limitations

While complexity theory is very powerful, and the notion of NP-hardness/completeness is a triumph of theoretical computer science, great care must be taken in applying these results to the real world. Because complexity theory is built on rigorous mathematics and defines things in somewhat formal and abstract terms, its definitions may not correspond to “common sense” notions of what concepts such as hardness mean.

It is also important to realize that the way reductions work means that NP-hardness/completeness is a worst-case statement, so not all instances of an NP-hard or NP-complete problem will actually be hard. In fact, for some problems, typical cases are not hard. We have already hinted at one example: video games such as Super Mario are formally NP-hard, but typical gameplay does not require advanced optimization algorithms, because a level built to be fun to play looks very different from one that encodes a hard problem. There are other examples among more typical computer science problems, such as certain kinds of knapsack problems and satisfiability problems. In fact, generating hard instances of these problems is an area of active research. The lack of a hardness guarantee for random instances of NP-hard optimization problems has even affected the benchmarking of quantum computers. Early attempts to benchmark D-Wave quantum annealers were based on randomly assigned couplings within the restricted hardware graph; however, work done within the spin glass community showed that such problems are relatively easy for the Monte Carlo family of algorithms.

Examples

Here is a non-exhaustive list of caveats to keep in mind when dealing with NP-hard optimization problems:

  • In practice, even algorithms that scale polynomially can become infeasible to use at certain sizes, so assuming that polynomial scaling means an algorithm will run in a practical amount of time is dangerous. A high-order polynomial like n^50 may as well be exponential for large n.

  • Conversely, an algorithm with worse-than-polynomial scaling may work well up to a certain scale, since the theory tells us nothing about the prefactors on the scaling terms or about the size at which the scaling approximation becomes valid.

  • Reducing the runtime of an algorithm in a way that leaves it super-polynomial, for example from 2^n to 2^(n/2), can still make a huge difference in practice (see the back-of-envelope numbers after this list). This is all any commercial optimization solver is able to do on (hard instances of) NP-hard optimization problems; otherwise it would show that P = NP.

  • Scaling only tells us how an algorithm performs on very large problems; it is sometimes better in practice to use an algorithm that does not have the optimal scaling, or even one whose scaling is unknown.

  • Scaling can be empirically observed, but there is no guaranteed way to tell if one is in the large-size limit.

  • NP-hardness is a worst-case statement: some instances of NP-hard problems may be easy to solve for some algorithms. In some cases, even typical randomly generated instances may be easy to solve.

  • Conversely, nothing in the theory says that all hard optimization problems must be NP-hard. Assuming P ≠ NP, showing that a problem is not NP-hard does not imply that an algorithm with polynomial scaling exists (however, if P = NP, all such optimization problems would have an algorithm with polynomial scaling).

  • The notion of efficient reductions only requires that the mapping take polynomial time; these polynomial factors can have a big impact on performance, so in practice the runtimes may be very different once reductions are performed. The reduction techniques used in computer science proofs tend to be designed to be general rather than efficient and are often not very useful in practice.

  • Solving, in the context of NP-hardness, means finding the exact optimal solution. There is complexity theory relating to approximate solutions, but we do not discuss it here. In practice, finding a good-quality approximate solution is often the goal.

  • Although not formally proven, it is widely believed among experts that P ≠ NP. This belief is based on decades of research that has not produced an efficient algorithm for any of these problems. Proving P ≠ NP would be exceedingly difficult, while a single efficient algorithm for any NP-hard problem would show that P = NP. There is an unclaimed one-million-dollar prize for a resolution of this question, so there is plenty of motivation to work on it.
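
To make two of the bullets above concrete (the n^50 point and the 2^n versus 2^(n/2) point), here is a back-of-envelope sketch. The figure of 10^9 basic operations per second is an arbitrary round number, not a statement about any particular machine:

```python
# Rough runtimes assuming an arbitrary 1e9 basic operations per second.
OPS_PER_SECOND = 1e9
SECONDS_PER_YEAR = 3.15e7

def years(operations):
    return operations / OPS_PER_SECOND / SECONDS_PER_YEAR

n = 80
print(f"2^n     at n={n}: {years(2**n):.3e} years")         # ~3.8e7 years
print(f"2^(n/2) at n={n}: {years(2**(n // 2)):.3e} years")  # ~3.5e-5 years (about 18 minutes)
print(f"n^50    at n={n}: {years(n**50):.3e} years")        # astronomically large

# Halving the exponent still leaves an exponential, yet it turns tens of
# millions of years into minutes; meanwhile the polynomial n^50 is utterly
# impractical despite being "efficient" in the formal sense.
```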

Experts widely agree that quantum algorithms will probably not be able to solve NP-hard optimization problems in polynomial time. Formally, the definition of P does not include quantum resources, so this question is different from whether P = NP; it instead concerns the relationship between NP and BQP, the class of problems efficiently solvable by quantum computers (i.e. whether or not NP ⊆ BQP). This belief is based on the advantage seen in simplified database-search problems, which suggests (but does not formally prove) that making an algorithm quantum can allow an optimization problem to be solved in roughly the square root of the classical runtime, but not exponentially faster. For a concrete example see quantum branch-and-bound; as the earlier discussion of 2^n versus 2^(n/2) shows, this is still a huge advantage.