Mathematicians Clear Hurdle in Quest to Decode Prime Numbers

The Riemann hypothesis, posed by Bernhard Riemann in the 19th century, concerns the distribution of the prime numbers. Mathematicians have made little progress on the hypothesis itself, but they have made progress on simpler related problems.

In a paper posted in September, Paul Nelson of the Institute for Advanced Study solved one of those problems, the subconvexity problem. The proof is a significant achievement in its own right, and it teases the possibility of even greater discoveries related to prime numbers to come.

Nelson said it is a far-fetched dream, but that he hopes the work will eventually yield some insight into how the Riemann hypothesis works.

The subconvexity problem matters because of its relationship to prime numbers, the most fundamental objects in mathematics. Plot the primes on the number line and they appear to be scattered at random, with no pattern to how they are distributed. In 1859 Riemann invented a revolutionary new approach, the Riemann zeta function, that promised to unlock the primes' hidden structure.

It is a result that a few years ago would have been considered science fiction.

Getting Complex

The question hinges on the Riemann zeta function, an infinite sum whose terms are the reciprocals of the whole numbers, with each denominator raised to a power defined by a variable s.
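
Written out (a standard form of the definition the paragraph describes, added here for reference), the function is:

```latex
\zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}} \;=\; \frac{1}{1^{s}} + \frac{1}{2^{s}} + \frac{1}{3^{s}} + \cdots
```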

If mathematicians could prove a basic property of this function, they would be able to estimate how many prime numbers there are in any given interval of the number line.
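
To give the flavor of such an estimate (standard background, not spelled out in the article): if the Riemann hypothesis is true, then the count $\pi(x)$ of primes up to $x$ satisfies

```latex
\pi(x) \;=\; \operatorname{Li}(x) + O\bigl(\sqrt{x}\,\log x\bigr), \qquad \operatorname{Li}(x) = \int_{2}^{x} \frac{dt}{\log t},
```

so the logarithmic integral predicts the prime count with an error only about the square root of $x$.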

Prior to Riemann, Leonhard Euler had used a similar function, in which the denominators are raised to powers that are real numbers, to create a new proof that there are infinitely many primes. Riemann's innovation was to let the variable s take complex values, which brings the whole store of techniques from complex analysis to bear on questions in number theory.
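
Euler's key identity, which ties the sum to the primes (standard background rather than a detail from the article), rewrites the series as a product over all primes:

```latex
\sum_{n=1}^{\infty} \frac{1}{n^{s}} \;=\; \prod_{p\ \text{prime}} \frac{1}{1 - p^{-s}}
```

If there were only finitely many primes, the product on the right would remain finite as $s$ approaches 1, while the sum on the left blows up; the contradiction shows the primes are infinite in number.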

"We haven't made any progress on the Riemann hypothesis in 150 years, whereas this is a question we can make incremental progress towards."

Paul Nelson, Institute for Advanced Study

Complex numbers have two parts, one real and one imaginary, the latter given as a multiple of the imaginary number i. Examples include 3 + 4i and 2 − 6i: the 3 and 2 are the real parts, while the 4i and −6i are the imaginary parts.

The Riemann hypothesis concerns the values of s that make the Riemann zeta function equal zero. It predicts that the only important values of s that do this are complex numbers whose real part equals 1/2. The function also equals zero when s is a negative even integer (a number whose imaginary part is zero), but those zeros are considered trivial. If the hypothesis is true, the Riemann zeta function explains exactly how primes are distributed on the number line. The details are complicated; Quanta has produced a video explaining how it works.
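
In symbols, the hypothesis (in its standard formulation, included here for reference) reads:

```latex
\zeta(s) = 0 \quad\text{and}\quad s \neq -2, -4, -6, \ldots \quad\Longrightarrow\quad \operatorname{Re}(s) = \tfrac{1}{2}
```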

The Riemann hypothesis has led to many advances in mathematics, though mathematicians have made little progress on the question itself. They have at times diverted their attention to slightly easier questions which are similar to Riemann's intractable riddle.

Next to Nothing

The problem Paul Nelson solved is two steps removed from the Riemann hypothesis itself. Each step warrants an explanation.

The first is the Lindelöf hypothesis, which says that the output of the Riemann zeta function remains small whenever the real part of s equals 1/2.

Here the real part of s is fixed at 1/2, while the imaginary part can be as large as you like. One way to make "small" precise is to compare the number of digits in the input with the number of digits in the output.

Mathematicians can show, with relatively little effort, that the output never has more than 25% as many digits as the input; in other words, the output doesn't grow disproportionately as the input grows. That easy estimate is called the trivial bound. The Lindelöf hypothesis predicts something far stronger: that the output essentially doesn't grow at all as the input grows, so the true figure should be 0%, not 25%.
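
As a rough illustration of this digit-counting way of keeping score, here is a small Python sketch (illustrative code of my own, not from the article or Nelson's papers) that uses the mpmath library to compare input and output sizes on the critical line:

```python
# A rough numerical illustration of the digit-counting comparison described
# above. Illustrative only; requires the third-party mpmath library.
from mpmath import mp, mpf, zeta, fabs, log10

mp.dps = 30  # work with 30 decimal digits of precision

for t in [10**2, 10**3, 10**4]:
    s = mpf("0.5") + 1j * t      # input on the critical line: real part 1/2
    out_size = fabs(zeta(s))     # magnitude of the zeta function's output
    # "Digits" of a number x is roughly log10(x); compare output to input.
    ratio = log10(out_size) / log10(t)
    print(f"t = {t}: output digits / input digits ~ {float(ratio):.3f}")
```

The ratio fluctuates from point to point, and can even be negative when the output is smaller than 1; the bounds discussed here govern its worst-case behavior as the input grows.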

For more than a century, mathematicians have been working to close the gap between the trivial bound and the conjectured one, and in that time they have made a dozen or so small improvements. The most recent came in 2017, when Jean Bourgain proved that for values of s with real part 1/2, the output of the Riemann zeta function has at most about 15% as many digits as the input: if the input is a million-digit number, the output has no more than roughly 150,000 digits. That is something, but it is still a far cry from proving the Lindelöf hypothesis.
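
In the exponent notation number theorists use (the percentages above correspond to these exponents; the value 13/84, about 0.155, is the exponent from Bourgain's paper), the progression looks like this, where $t$ is the imaginary part of the input and $\varepsilon$ is any positive number:

```latex
\begin{aligned}
\bigl|\zeta(\tfrac{1}{2} + it)\bigr| &\ll t^{1/4 + \varepsilon} && \text{trivial bound (25\%)} \\
\bigl|\zeta(\tfrac{1}{2} + it)\bigr| &\ll t^{13/84 + \varepsilon} && \text{Bourgain (about 15\%)} \\
\bigl|\zeta(\tfrac{1}{2} + it)\bigr| &\ll t^{\varepsilon} && \text{Lindel\"of hypothesis (0\%)}
\end{aligned}
```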

"We haven't made any progress on the Riemann hypothesis in 150 years, whereas this is a question we can make incremental progress towards," Nelson said. "There's a way of keeping score."

The Lindelöf hypothesis lends itself to exactly that kind of scorekeeping. The problem Nelson solved, however, is one step further removed from it.

Families of Functions

The Riemann zeta function is one member of a large class of mathematical objects called L-functions, which are used in many different ways. By modifying the definition of the Riemann zeta function, mathematicians can calculate other L-functions that provide more refined information about the primes. Some L-functions, for example, measure how many primes below a certain value have a given number as their last digit.
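
A standard example of such a modification (background detail, not spelled out in the article) is a Dirichlet L-function, in which each term of the zeta sum is weighted by a periodic function $\chi$ called a character:

```latex
L(s, \chi) \;=\; \sum_{n=1}^{\infty} \frac{\chi(n)}{n^{s}}
```

Characters that repeat with period 10 are what allow these functions to keep track of primes by their last digit.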

L-functions are objects of intense study and central players in a sweeping research vision known as the Langlands program. Yet there is still no full theory explaining what L-functions are.

There is a big zoo of these things, Nelson said, and for most of them mathematicians cannot prove much of anything.

The generalized Lindelöf hypothesis predicts that for all L-functions, the output stays small relative to the input whenever the real part of the complex-number input is 1/2.

But while mathematicians chipped away at the Lindelöf hypothesis for the zeta function, they managed only scattered progress on the corresponding question for L-functions, known as the subconvexity problem: showing that an L-function's output has less than 25% as many digits as its input, thereby breaking the trivial bound. For a few specific families of L-functions, mathematicians were able to do that, but they were far from achieving a general result.
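
In the notation number theorists typically use (added for reference; the "size of the input" of a general L-function is measured by a quantity $C$ called the analytic conductor), breaking the trivial bound means proving, for some fixed $\delta > 0$:

```latex
\bigl|L(\tfrac{1}{2})\bigr| \;\ll\; C^{1/4 - \delta}
```

The trivial (convexity) exponent is 1/4, the digit-count analogue of 25%; any positive $\delta$, however tiny, counts as subconvexity.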

In the 1990s, mathematicians realized that breaking the trivial bound for general L-functions could lead to advances on a number of problems, including questions in an area of research called arithmetic quantum chaos and a question about which integers can be written as sums of three squares.

Over the last 20 to 30 years, people realized that many such problems could be reduced to breaking the trivial bound, said Philippe Michel of the Swiss Federal Institute of Technology Lausanne.

After two decades of work, Nelson was the mathematician who finally did it.

A Change in Perspective

Two teams of mathematicians, one involving Joseph Bernstein and the other Philippe Michel and Akshay Venkatesh (who has since won the Fields Medal, math's highest honor), changed how mathematicians estimate L-functions, creating a new way to think about the size of their outputs.

They showed that the size of an L-function's output is linked to the size of an integral, called a period, which can be calculated by integrating a function called an automorphic form over a geometric space. The connection gave mathematicians more tools for breaking the trivial bound.
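
Schematically (a heuristic shape of such period formulas, not the precise statement from any of the papers), integrating an automorphic form $\varphi$ over a space $X$ produces a period whose square is proportional to a central L-value:

```latex
\Bigl|\int_{X} \varphi(x)\, dx\Bigr|^{2} \;\approx\; (\text{explicit correction factors}) \times L\bigl(\tfrac{1}{2}\bigr)
```

Bounding the geometric side of such an identity then bounds the L-value on the other side.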

Suddenly there were more techniques to play with, said Michel, of the Swiss Federal Institute of Technology Lausanne.

Nelson and Venkatesh collaborated on a paper that determined which automorphic forms are best suited to making the kinds of size estimates the subconvexity problem requires. Nelson then produced two solo papers on the topic, the first in 2020 and the second in September, that together solved it.

In them, Nelson proved that the outputs of general L-functions always have slightly less than 25% as many digits as their inputs. He broke the trivial bound by only a hair, but sometimes that is all it takes to cross from one world into another.

Mathematicians are satisfied with this because he broke the trivial bound, and the subconvexity problem is precisely about the breaking of things.

With subconvexity settled, mathematicians can turn to other problems, maybe even the Riemann hypothesis one day. That may seem far-fetched right now, but math thrives on hope, and at the very least, Nelson's new proof has provided that.