In which specific cases do quantum computers outperform classical ones? Today's quantum computers are prone to errors that can ruin their calculations, which makes that question difficult to answer.

In one case, they already have. In 2019, Google announced that its 53-qubit machine had reached the point at which a quantum computer does something beyond the reach of any practical classical algorithm. Physicists at the University of Science and Technology of China soon followed with demonstrations of their own.

Computer scientists want to know whether classical algorithms will be able to keep up as quantum computers get bigger and bigger, said Scott Aaronson, a computer scientist at the University of Texas, Austin.

Errors make the general question difficult to answer. Future quantum machines will be able to compensate for their flaws using a technique called quantum error correction, but that capability is still a long way off. In the meantime, is it possible to get the hoped-for runaway quantum advantage even with uncorrected errors?


Researchers long suspected the answer was no, but they couldn't prove it for all cases. Now a team of computer scientists has taken a major step toward a comprehensive proof that error correction is necessary for a lasting quantum advantage in random circuit sampling: they developed a classical algorithm that can mimic random circuit sampling experiments when errors are present.

It is a beautiful theoretical result, though the new algorithm is not useful for simulating real experiments like Google's.

In a random circuit sampling experiment, researchers start with an array of qubits and manipulate them with randomly chosen operations called quantum gates. Some gates cause pairs of qubits to become entangled, meaning they share a quantum state and can no longer be described separately. Repeated layers of gates bring the qubits into a more complicated entangled state.

To learn about that state, researchers then measure all the qubits in the array. Measurement collapses their collective quantum state into a random string of ordinary bits, one per qubit. The number of possible outcomes grows rapidly with the number of qubits: with 53 qubits there are 2^53, nearly 10 quadrillion, possible strings. And not all strings are equally likely. Sampling from a random circuit means repeating such measurements many times to build up a picture of the probability distribution underlying the outcomes.
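The sampling process described above can be sketched with a minimal brute-force statevector simulator. This is a toy illustration in Python; the circuit layout, gate choices, and sizes are invented for the example, not taken from any real experiment:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_single_qubit_gate():
    # Haar-random 2x2 unitary via QR decomposition of a complex Gaussian matrix
    z = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
    q, r = np.linalg.qr(z)
    d = np.diagonal(r)
    return q * (d / np.abs(d))

def apply_gate(state, gate, qubit, n):
    # Apply a single-qubit gate to one qubit of an n-qubit statevector
    psi = np.moveaxis(state.reshape([2] * n), qubit, 0)
    psi = np.tensordot(gate, psi, axes=1)
    return np.moveaxis(psi, 0, qubit).reshape(-1)

def apply_cz(state, a, b, n):
    # Entangle qubits a and b with a controlled-Z gate (sign flip on |11>)
    psi = state.reshape([2] * n).copy()
    idx = [slice(None)] * n
    idx[a], idx[b] = 1, 1
    psi[tuple(idx)] *= -1
    return psi.reshape(-1)

n, depth = 4, 5                # toy sizes; real experiments used 53 qubits
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                 # start in |0...0>

for _ in range(depth):         # alternate random rotations and entangling gates
    for q in range(n):
        state = apply_gate(state, random_single_qubit_gate(), q, n)
    for q in range(0, n - 1, 2):
        state = apply_cz(state, q, q + 1, n)

# Measurement collapses the state to a random bitstring; the Born rule says
# each string appears with probability equal to its squared amplitude.
probs = np.abs(state) ** 2
probs /= probs.sum()           # guard against floating-point drift
samples = rng.choice(2**n, size=10, p=probs)
print([format(int(s), f"0{n}b") for s in samples])
```

Repeating the measurement many times and histogramming the bitstrings approximates the circuit's output distribution, which is exactly what a random circuit sampling experiment records.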

How difficult is it to mimic that probability distribution with a classical algorithm that doesn't use any entanglement?

In 2019, researchers proved that for error-free quantum circuits it is indeed difficult. They used the framework of computational complexity theory, which classifies the relative difficulty of different problems. In this field, the number of qubits, n, is not treated as a fixed number. Think of it as something that is going to keep increasing, in the words of a physicist at the Massachusetts Institute of Technology; the question to ask is whether the effort required grows exponentially or polynomially in n. When n grows large enough, any algorithm whose runtime is exponential in n lags behind any algorithm whose runtime is polynomial in n. When theorists say a problem is hard for classical computers but easy for quantum computers, this is what they mean: the best classical algorithm takes exponential time, while a quantum computer can solve the problem in polynomial time.
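The exponential-versus-polynomial distinction is easy to see numerically. The cost models below (n^3 steps versus 2^n steps) are purely illustrative, not the costs of any specific algorithm:

```python
# An exponential cost can start out smaller than a polynomial one,
# but it overtakes any polynomial once n grows large enough.
for n in [1, 5, 10, 20, 30, 40]:
    poly, expo = n**3, 2**n    # illustrative cost models
    cheaper = "polynomial" if poly < expo else "exponential"
    print(f"n={n:2d}  n^3={poly:>6d}  2^n={expo:>14d}  cheaper: {cheaper}")
```

At small n the exponential cost is actually cheaper (2^5 = 32 versus 5^3 = 125), but from n = 10 onward the polynomial wins permanently, and by n = 40 the gap is a factor of more than ten million.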

That 2019 proof, however, ignored the effects of errors: it established a quantum advantage for random circuit sampling only when circuits are error-free.

Accounting for errors raises a new choice: as you increase the number of qubits, do you also add more layers of gates, increasing the circuit's depth? If you keep the depth constant, you won't generate much entanglement, and classical simulation will remain feasible. If you instead increase the depth, the cumulative effects of gate errors will wash out the entanglement, and the output again becomes easy to simulate classically.
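Why depth washes out the advantage can be seen with a toy error model. It assumes, purely for illustration, that each gate fails independently with a hypothetical probability eps, so the whole circuit runs error-free with probability (1 - eps) raised to the number of gates:

```python
def circuit_fidelity(n_qubits: int, depth: int, eps: float = 0.005) -> float:
    # Toy model: an n-qubit, depth-d circuit applies roughly n*d gates;
    # if each fails independently with probability eps, the chance that
    # no error occurs decays exponentially in the total gate count.
    return (1 - eps) ** (n_qubits * depth)

for n in [10, 53, 100]:
    for d in [5, 20, 50]:
        print(f"n={n:3d}  depth={d:2d}  error-free probability ~ "
              f"{circuit_fidelity(n, d):.3f}")
```

At fixed depth the decay with qubit count is gradual, but growing the depth along with the qubit count drives the error-free probability toward zero, which is why deep noisy circuits lose their entanglement structure.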

In between lies a Goldilocks zone. Before the new paper, it remained possible that quantum advantage could survive here even as the number of qubits increased: the output would be degraded by errors, yet might still be hard to simulate classically at every step.

The new paper closes this loophole. The authors constructed a classical algorithm for noisy random circuit sampling and proved that its runtime grows polynomially with the time required to run the corresponding quantum experiment, not exponentially, establishing a tight theoretical connection between the speeds of the classical and quantum approaches.

One small gap remains: the underlying assumptions break down for certain shallow circuits, and there classical simulation methods are unknown. But few researchers hold out hope that random circuit sampling will prove hard to mimic classically in that narrow regime, said Bill Fefferman, a computer scientist at the University of Chicago and one of the authors of the 2019 proof.

The results suggest that random circuit sampling will not yield a lasting quantum advantage. They also illustrate how algorithms that complexity theorists call efficient aren't necessarily fast in practice: at the low error rates achieved in quantum supremacy experiments, the new classical algorithm is far too slow to be practical. Nor does the result change what researchers knew about how hard it is to reproduce error-free random circuit sampling. The paper is more of a confirmation of the theory behind random circuit sampling than anything else, according to the physicist leading Google's quantum supremacy research.

All researchers agree that quantum error correction is crucial to the long-term success of quantum computing. At the end of the day, error correction is the solution.

Scott Aaronson is a member of the advisory board.