The findings did not require any new frameworks or tools. The authors figured out how to circumvent a roadblock that had arisen in decades-old work by Wigderson in collaboration with Noam Nisan of the Hebrew University of Jerusalem.
As one of them put it, "There was a silly way of getting around it, and that's what we realized."
After computers had been around for a while, a troubling trend emerged: for some problems, scientists had to wait far too long for their machines to produce an answer.
Researchers began to suspect that some of these problems would stay hard no matter how powerful computers became. Consider the task of finding a Hamiltonian path in a graph, which is a collection of points connected by edges. A Hamiltonian path travels along the edges and visits every point exactly once. As the number of points increases, the time needed to determine whether such a path exists grows rapidly; the best known algorithms take exponentially longer as the graph scales up, which makes them impractical for large graphs.
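To get a feel for that scaling, here is a minimal brute-force sketch in Python (an illustration written for this article, not code from the researchers). It tries every possible ordering of the points, so the work grows factorially, even faster than exponentially, as points are added.

```python
from itertools import permutations

def has_hamiltonian_path(vertices, edges):
    """Brute force: try every ordering of the points and check whether
    consecutive points in the ordering are always joined by an edge."""
    edge_set = {frozenset(e) for e in edges}
    for order in permutations(vertices):          # n! orderings to try
        if all(frozenset((a, b)) in edge_set
               for a, b in zip(order, order[1:])):
            return True
    return False

# A small example graph: a square of points 0-1-2-3 plus one diagonal edge.
vertices = [0, 1, 2, 3]
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
print(has_hamiltonian_path(vertices, edges))      # True (for example, 1-0-2-3)
```

Cleverer algorithms exist, but the best known ones still take exponentially long in the number of points.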
Many other problems seemed to take just as long. Computer scientists showed that if an efficient solution could be found for one of these hard problems, it could be adapted to solve the others. The problems formed a category called NP. Problems in NP come up in many contexts; they are what Wigderson means when he refers to "all the problems we hope to solve."
Other problems did not seem hard at all: they could be solved quickly, and they formed a category called P. Computer scientists came to believe that NP problems really were harder than P problems and could never be solved efficiently. But there was always a chance they were wrong.
Computer scientists wanted to settle whether NP problems really were harder. Proving that even one of the hard problems required exponential time would have sufficed, but no one had any idea how to do that. So they sought out related questions that might be easier to pin down.
One set of problems seemed promising: those built entirely out of addition and multiplication. Counting Hamiltonian paths is an example. The count can be carried out purely by adding, subtracting and multiplying numbers attached to the graph's points; no other operations are required.
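One standard way to do this is a textbook inclusion-exclusion argument, sketched below in Python as an illustration rather than as the authors' own construction. The number of Hamiltonian paths equals an alternating sum, over subsets of the points, of the number of walks confined to each subset, and every step of the calculation is an addition, subtraction or multiplication.

```python
from itertools import combinations

def count_hamiltonian_paths(n, edges):
    """Inclusion-exclusion: the number of Hamiltonian paths is the sum, over
    all subsets S of the n points, of (-1)**(n - len(S)) times the number of
    (n-1)-step walks that stay inside S.  Each path is counted once per
    direction, so an undirected path contributes twice."""
    adj = [[0] * n for _ in range(n)]
    for a, b in edges:
        adj[a][b] = adj[b][a] = 1

    total = 0
    for size in range(1, n + 1):
        sign = (-1) ** (n - size)
        for S in combinations(range(n), size):
            # walks[v] = number of walks inside S that currently end at v
            walks = {v: 1 for v in S}              # length-0 walks
            for _ in range(n - 1):                 # extend the walks n-1 times
                walks = {v: sum(walks[u] for u in S if adj[u][v])
                         for v in S}
            total += sign * sum(walks.values())
    return total

# The same square-plus-diagonal graph as before: 6 paths, each counted twice.
print(count_hamiltonian_paths(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))  # 12
```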
This set of simpler problems mirrored the larger landscape of more complicated tasks: counting Hamiltonian paths, for instance, seemed to take longer and longer as graphs grew. In 1979 Leslie Valiant of Harvard University showed that many of these problems were equivalent to one another in their difficulty, or lack of it, effectively sorting them into an easy category and a seemingly hard one, analogous to P and NP. Computer scientists named the categories VP and VNP after him.
But no one could prove that VNP problems were truly hard. In a sense, computer scientists had recreated the P versus NP question in a new algebraic setting, only now they had a key advantage: VNP problems appear to be even more difficult than NP problems, and if you want to prove something is hard, the more hardness there is to work with, the better.
It is even harder than NP, Shpilka said.
In the ensuing decades, computer scientists made more progress on the VP versus VNP question than on P versus NP. But they still could not prove that any of these arithmetic problems were truly hard, until the recent work by the three researchers.
To follow the new work, it helps to know how computer scientists think about problems built from addition and multiplication. Such problems are captured by expressions called polynomials, which consist of variables that are added and multiplied together.
You can use a polynomial to represent the problem of counting Hamiltonian paths. If you add more points and edges, you also add more variables to the polynomial.
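One common textbook way to write such a polynomial, shown here as an illustration (the name HP_n is just a label for this sketch, and the polynomial used in the research may be arranged differently), attaches a variable x_ij to every pair of points i and j and sums over all orderings of the n points:

```latex
\mathrm{HP}_n(x) \;=\; \sum_{\sigma \in S_n} \; \prod_{i=1}^{n-1} x_{\sigma(i)\,\sigma(i+1)}
```

Here S_n stands for the set of all orderings of the n points. Setting x_ij = 1 whenever the graph has an edge between points i and j, and x_ij = 0 otherwise, makes the polynomial evaluate to the number of Hamiltonian paths, counting each direction separately. Every additional point brings a fresh batch of variables and many more terms.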
If you want to prove that counting Hamiltonian paths is hard, you need to show that the corresponding polynomial requires more and more operations to compute as you add points and edges. Computing x² requires one operation (multiplying x by itself), whereas x² + y requires two. The total number of operations required is called the size.
But determining the size of a polynomial is not straightforward. Take x² + 2x + 1. It appears to have a size of four: two multiplications and two additions. Yet it can be rewritten as (x + 1)(x + 1), a product of two sums, which uses only three operations. And as a problem scales up and more variables are added, this kind of rewriting can keep its size small.
A few years after Valiant's work, computer scientists found a property that makes the size of a problem easier to analyze: depth, the number of alternating layers of sums and products in an expression. The polynomial x² + 2x + 1 is a sum of products (x², 2x and 1), so it has a depth of two. The expression (x + 1)(x + 1) has a depth of three, because it is technically the same as 0 + (x + 1)(x + 1), making it a sum of products of sums.
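Both measures can be made concrete with a small sketch (a toy representation written for this illustration, not taken from the research): expressions are stored as trees of sums and products, size counts the operations, and depth counts the alternating layers, treating the top level as a sum just as in the 0 + (x + 1)(x + 1) convention above.

```python
from dataclasses import dataclass

@dataclass
class Node:
    op: str              # "leaf", "+" or "*"
    children: tuple = ()
    label: str = ""      # used only for leaves, e.g. "x" or "2"

def size(e):
    """Number of arithmetic operations: combining k terms costs k - 1."""
    if e.op == "leaf":
        return 0
    return (len(e.children) - 1) + sum(size(c) for c in e.children)

def depth(e):
    """Alternating layers of sums and products, with an implicit sum on top
    (so a bare product counts as 0 + product, a sum of products)."""
    return 1 + _extra_layers(e, "+")

def _extra_layers(e, enclosing_op):
    if e.op == "leaf":
        return 0
    new_layer = 0 if e.op == enclosing_op else 1
    return new_layer + max(_extra_layers(c, e.op) for c in e.children)

x, one, two = Node("leaf", label="x"), Node("leaf", label="1"), Node("leaf", label="2")

# x^2 + 2x + 1 written directly: a sum of products.
expanded = Node("+", (Node("*", (x, x)), Node("*", (two, x)), one))

# The same polynomial written as (x + 1)(x + 1): a product of sums.
factored = Node("*", (Node("+", (x, one)), Node("+", (x, one))))

print(size(expanded), depth(expanded))   # 4 2
print(size(factored), depth(factored))   # 3 3
```

The factored form uses fewer operations but more layers, a first glimpse of the trade-off between size and depth.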
To simplify matters further, computer scientists restricted polynomials to a form with a property called constant depth, where the pattern of sums and products doesn't change as the problem grows. This restriction generally comes at a cost in size, since allowing greater depth can let a polynomial be written with fewer operations. Representations with a fixed depth of this kind are called constant-depth formulas.
By studying polynomials of constant depth, computer scientists made further progress, uncovering a sequence of findings that culminated in the new work.
The first step toward the new result came in the 1996 paper by Nisan and Wigderson. The pair focused on a problem that involves multiplying tables of numbers called matrices, and they simplified it in two ways. First, they represented it with formulas of constant depth. Second, they considered only formulas with a simple structure called set-multilinear, in which each variable appears with an exponent of at most 1. Computer scientists already knew that certain problems could be converted into this relatively simple set-multilinear form, at the cost of a sub-exponential increase in their size.
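To see what that structure looks like, consider the polynomial for a single entry of a product of d matrices, a standard way of writing the iterated matrix multiplication problem (the exact parameters Nisan and Wigderson studied may differ). With a variable x^(k)_ij for the entry in row i and column j of the k-th matrix, the polynomial is

```latex
\mathrm{IMM}(x) \;=\; \sum_{i_1,\, i_2,\, \ldots,\, i_{d-1}} x^{(1)}_{1\, i_1}\, x^{(2)}_{i_1\, i_2} \cdots x^{(d)}_{i_{d-1}\, 1}
```

where each index runs over the rows and columns of the matrices. Every monomial picks exactly one variable from each matrix and never raises any variable above the first power, which is exactly the set-multilinear property.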
Nisan and Wigderson showed that, as the matrices scale up, solving this restricted version of the matrix multiplication problem requires formulas of ever-growing size. In effect, they proved that an important problem was hard, a victory in the broader effort to establish hardness. But the result was limited, because it applied only to formulas with the simple set-multilinear structure.
If you were working outside of complexity theory, though, it might not have meant much to you.
Over the next 20 years, computer scientists came to understand the properties of depth and structure much better, and they discovered a tug of war between the two.
Researchers made this balancing act precise: the simplification gained from the set-multilinear structure could be paid for with two extra levels of depth. If a set-multilinear formula of depth five required exponential size, then so would a general formula of depth three.
In the new work, the authors showed that set-multilinear formulas of constant depth for the matrix multiplication problem must grow at a rate comparable to exponential, which means that general depth-three formulas must grow quickly too. And the balancing act holds for all depths, not just three and five: they proved that the size of a general formula of any constant depth grows superpolynomially, faster than any fixed power of the problem size, as the problem scales up.
In other words, they proved that matrix multiplication is hard whenever it must be represented by a formula of constant depth. Before this work, no one knew whether such constant-depth formulas were easy or hard.
The result provides the first general understanding of when an arithmetic problem becomes hard once it is restricted to formulas of constant depth. The matrix multiplication problem was already known to be a VP problem, meaning it is relatively easy when the depth is not restricted, so the result pinpoints the constant-depth restriction itself as the source of the hardness.
Shpilka said that the model is so restricted that even things that should be easy in an unrestricted world become hard.
The ultimate question in the field of algebraic complexity is whether VNP problems really are harder than VP problems. The new result shows only that constant-depth formulas are hard, but researchers are trying to build on it to reach that larger answer.
That answer could still be a long way off, Saraf said, but the new result is a big milestone on the way to showing that VP is not equal to VNP.
And for the greater P versus NP question, there is now a little more reason to hope for an answer. After all, to solve the problems we hope to solve, we first need to know which ones we cannot.