If a million computer scientists had dinner together, they would run up an enormous bill. If one of them wanted to check that the bill was correct, the obvious approach would be laborious: go through it line by line, adding up the charges to confirm that the total matched.

In 1992, six computer scientists proved in two papers that a radical shortcut is possible. There is always a way to rewrite a bill so that it can be checked with just a few queries. The same holds for any computation, and even any mathematical proof, since both come with their own receipt, so to speak: a step-by-step record that a computer or a mathematician must follow. The rewritten objects are called probabilistically checkable proofs, or PCPs.

The acronym was apparently chosen after police mistakenly went to the hotel room of one of the inventors, Shmuel Safra.


PCPs have become some of the most important tools in theoretical computer science. They have even found their way into practical applications, such as cryptocurrencies, where they are used to roll up large batches of transactions into a small object that is far easier to verify.

There are few other examples of such deep algebraic tools actually making it into practice.

Before PCPs were invented, computer scientists had already identified a whole class of problems whose solutions behave like dinner checks: easy to verify once found, but seemingly impractical to find in the first place.

They named this class of hard-to-solve but efficient-to-verify problems NP. NP provides a conceptual home for many of the practical problems we care about, and also for more abstract ones, like finding proofs of mathematical theorems. Just as an itemized bill serves as proof of the total owed, a proof is a step-by-step recipe that establishes its mathematical conclusion with absolute certainty. And once you have a proof in hand, checking it is easy. In this sense, NP is the class of proofs.

Before the PCP theorem was proved, nobody would have dared to claim that proofs could be checked so easily.


In the 1980s, computer scientists began to rethink what a proof could be. Cryptographers wondered whether it was possible to become convinced that a proof is correct without ever seeing it. In their framework, a prover supplies a proof and a verifier checks it.

Researchers then showed that the prover could rewrite its proof into a new form that the verifier could check with a constant number of queries, regardless of the proof's original length; the number of queries was eventually brought down to just two or three. By making more queries, the verifier can drive its certainty as high as it likes. The result ran contrary to intuition: surely a longer proof should require examining more of it. Not so.
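The idea that a few random queries can yield near-certainty, with confidence amplified by repetition, predates PCPs. A classic illustration is Freivalds' algorithm for verifying a matrix product, which is not a PCP but shows the same flavor: each random probe catches a wrong answer with probability at least 1/2, so k probes shrink the chance of being fooled to at most 2^-k. A minimal sketch:

```python
import random

def freivalds_check(A, B, C, rounds=10):
    """Probabilistically verify that A times B equals C.

    Each round multiplies by a random 0/1 vector r and compares
    A @ (B @ r) with C @ r -- O(n^2) work instead of O(n^3).
    A wrong C survives one round with probability <= 1/2, so k
    rounds reduce the error probability to at most 2**-k.
    """
    n = len(A)

    def matvec(M, v):
        return [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]

    for _ in range(rounds):
        r = [random.randint(0, 1) for _ in range(n)]
        if matvec(A, matvec(B, r)) != matvec(C, r):
            return False  # caught a discrepancy on this probe
    return True  # consistent on every random probe

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[19, 22], [43, 50]]    # the correct product A @ B
bad = [[19, 22], [43, 51]]  # one corrupted entry
```

With the corrupted matrix, forty rounds miss the error only with probability about 2^-40, which is why repeating cheap queries is as good in practice as one expensive exact check.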

Before this work, nobody imagined such a thing could be true.

The theorem also explained some of NP's intriguing properties. For many NP problems, the answers seemed hard not only to compute exactly but even to approximate, and it was the PCP theorem that helped explain why. The theorem says that a solution to an NP problem can always be reformatted so that nearly all of a verifier's random checks pass. From the verifier's vantage point, something passing 90 percent of the checks already looks very close to a solution. So if finding even such an approximate solution were easy, solving the problem exactly would be too; and because NP problems are hard to solve, finding a solution that merely comes close must also be hard.
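The verifier's vantage point can be made concrete with a toy sketch. Here a claimed solution to a satisfiability problem is spot-checked by sampling random clauses: a genuine solution passes every probe, while an assignment satisfying only a fraction p of the clauses survives k probes with probability p^k. This is an illustrative analogy for random checking, not the actual PCP reformatting, and the helper names are my own:

```python
import random

def sample_check(clauses, assignment, queries=20):
    """Spot-check a claimed SAT solution by sampling random clauses.

    Each clause is a tuple of literals: literal i means variable i
    must be True, -i means it must be False. A fully satisfying
    assignment passes every probe; one satisfying only a fraction p
    of the clauses survives k probes with probability p**k.
    """
    for _ in range(queries):
        clause = random.choice(clauses)
        if not any((lit > 0) == assignment[abs(lit)] for lit in clause):
            return False  # sampled an unsatisfied clause
    return True

satisfying = {1: True, 2: True}
all_good = [(1, 2)] * 10               # every clause satisfied
mostly_good = [(1, 2)] * 9 + [(-1, -2)]  # 90 percent satisfied
```

Against `mostly_good`, each probe misses the bad clause with probability 0.9, so a few hundred probes expose the 90-percent impostor almost surely, which is exactly the gap the PCP theorem exploits.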

Computer scientists soon began to think about practical applications for PCPs. But the PCPs of the 1990s were wildly inefficient: producing one could take a prover thousands of years, and storing one would require a hard drive the size of a planet.

By 2008, researchers had found constructions whose size and computational cost scaled up much more slowly with the computation being verified, making them far more manageable. Then, in the mid-2010s, researchers developed an even smaller kind of proof, called an interactive oracle proof, which adds extra rounds of interaction between the prover and the verifier; in each round, the prover needs to produce only something much smaller.

By adding interaction and using much of the same math behind PCP technology, you get practical results, according to Eli Ben-Sasson, a computer scientist who left the Technion in Haifa, Israel, to co-found the cryptography company StarkWare.

In the past decade, computer scientists have found direct applications for PCPs in the software behind cryptocurrencies, applications that are now raising intriguing theoretical questions of their own. In a typical system, a powerful computer (the prover) processes a large batch of financial transactions and produces a short proof, which a much weaker computer (the verifier) can check far more quickly than it could redo the transactions itself.
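Real rollup systems rely on succinct proofs built from PCP machinery, but the basic commit-and-check flavor can be sketched with a Merkle tree, a standard tool these systems also use: the prover condenses a large batch of transactions into one small root hash, and the verifier checks any individual transaction against that root with only a logarithmic-size path. This is a simplified toy, not any production protocol:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaf hashes up to a single root hash."""
    level = leaves[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_path(leaves, index):
    """Collect the sibling hashes needed to recompute the root from one leaf."""
    level, path = leaves[:], []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        path.append((level[index ^ 1], index % 2))  # (sibling, am-I-the-right-child)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_leaf(root, leaf, path):
    """Recompute the root from a leaf and its sibling path; cheap for the verifier."""
    node = leaf
    for sibling, is_right in path:
        node = h(sibling + node) if is_right else h(node + sibling)
    return node == root

txs = [f"tx{i}".encode() for i in range(8)]  # hypothetical transaction batch
leaves = [h(t) for t in txs]
root = merkle_root(leaves)       # the small object the prover publishes
proof = merkle_path(leaves, 5)   # evidence that transaction 5 is in the batch
```

The verifier never touches the full batch; it recomputes three hashes and compares against the root, which is the asymmetry the article describes between the large prover and the small verifier.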

But suppose the prover tries to cheat by concealing a set of false transactions. The property that guarantees the verifier will catch such cheating with overwhelming probability is called soundness, and researchers say that soundness matters in practical applications as much as in theory.

A paper released in May 2020 by Ben-Sasson, Dan Carmon, Yuval Ishai, Swastik Kopparty and Shubhangi Saraf showed that the soundness of one modern PCP system reaches a fundamental limit from theoretical computer science: roughly, the maximum amount of corruption data can sustain and still be recoverable once it has been encoded in Reed-Solomon form.
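Reed-Solomon encoding, the code at the heart of that limit, has a simple core idea: treat the data as defining a low-degree polynomial and publish its values at more points than the degree requires, so that any sufficiently large subset of the points pins down the original data. The sketch below shows erasure recovery over a tiny prime field; real systems use proper error-correcting decoders and much larger parameters, and all names here are my own:

```python
P = 257  # a small prime; arithmetic is done in the field GF(257)

def lagrange_eval(points, x):
    """Evaluate the unique degree < len(points) polynomial through
    the given (xi, yi) pairs at x, modulo P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        # pow(den, P - 2, P) is the modular inverse of den (Fermat's little theorem)
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def rs_encode(message, n):
    """Systematic Reed-Solomon: place the k message symbols at x = 0..k-1,
    then publish the interpolating polynomial's values at x = 0..n-1."""
    pts = list(enumerate(message))
    return [lagrange_eval(pts, x) for x in range(n)]

def rs_recover(survivors, k):
    """Rebuild the message from any k surviving (x, y) pairs."""
    pts = survivors[:k]  # k points determine a degree < k polynomial uniquely
    return [lagrange_eval(pts, x) for x in range(k)]

message = [7, 42, 99]                    # k = 3 symbols
codeword = rs_encode(message, 7)         # stretched to n = 7 symbols
kept = [(1, codeword[1]), (4, codeword[4]), (6, codeword[6])]  # 4 of 7 erased
```

Any three of the seven published values recover the message exactly; lose more than n - k and recovery fails. That trade-off between redundancy and recoverability is the kind of hard ceiling the soundness result runs into.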

The system can still be made more efficient, however. Recently, two groups of researchers discovered an optimal method for checking whether a large block of data has been corrupted: a test of integrity that needs only a few queries while also reaching perfect efficiency in terms of speed and size. It is a proof of concept that perfect PCPs might one day be found.

It is not an easy problem, but it feels like one we should be able to solve.