In the early 1950s, the Institute for Advanced Study embarked on a high-tech project. At the request of John von Neumann and Herman Goldstine, the physicist Hedvig Selberg programmed the institute's new computer to calculate mathematical sums whose study dates to around the turn of the 19th century.
The sums were named after Carl Friedrich Gauss, who had summed up numbers of the form $latex e^{2\pi i n^2/p}$, letting $latex n$ run from 0 up to $latex p - 1$ for a prime number $latex p$. Since their inception, these quadratic Gauss sums have proved their usefulness time and again, says Jeffrey Hoffstein, a mathematician at Brown University.
In the mid-19th century, the German mathematician Ernst Kummer was toying with a close relative of the Gauss sums, in which the $latex n^2$ in the exponent is replaced by an $latex n^3$. Kummer made a keen observation about these cubic Gauss sums, one that would fuel more than a century of inquiry in number theory.
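To make the objects concrete, here is a short Python sketch (the function name is mine) that evaluates both kinds of sums numerically by brute force:

```python
import cmath

def gauss_sum(p, power):
    """Sum of e^(2*pi*i*n^power / p) over n = 0, 1, ..., p - 1."""
    return sum(cmath.exp(2j * cmath.pi * (n**power % p) / p) for n in range(p))

# Quadratic Gauss sum for p = 7: Gauss showed its magnitude is sqrt(p)
# (for p ≡ 3 mod 4 the value is i*sqrt(p)).
print(gauss_sum(7, 2))  # ≈ 2.6458j, i.e. i*sqrt(7)

# Cubic Gauss sum for p = 7: a real number with no known simple formula.
print(gauss_sum(7, 3))  # ≈ 4.7409
```

Reducing $latex n^{\mathrm{power}}$ modulo $latex p$ before exponentiating keeps the floating-point argument small; for large primes one would use Python's three-argument `pow(n, power, p)` instead.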
The values of the cubic sums cannot be reworked into any simple formula. Lacking one, Kummer had no choice but to calculate, and calculate, and calculate by brute force, notes Matthew Young, a mathematician at Texas A&M University. Kummer gave up after plowing through 45 sums.
Kummer noticed something intriguing. When each sum is normalized (divided by $latex 2\sqrt{p}$, which squeezes its value into the interval between $latex -1$ and 1), one might expect the results to spread evenly across that interval. Instead, Kummer found them distributed quite unevenly: roughly half of his sums fell between 1/2 and 1, about a third landed between $latex -1/2$ and 1/2, and only about a sixth fell between $latex -1$ and $latex -1/2$. The values clustered toward 1.
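Kummer's experiment is easy to repeat today. The sketch below (helper names are mine, and it assumes the normalization by $latex 2\sqrt{p}$ described above) tallies the normalized cubic Gauss sums for the 45 primes below 500 that leave a remainder of 1 when divided by 3:

```python
import math

def normalized_cubic_gauss_sum(p):
    """Cubic Gauss sum of a prime p ≡ 1 (mod 3), divided by 2*sqrt(p).
    The sine terms cancel in pairs (since (p-n)^3 ≡ -n^3 mod p), so the
    sum is real and only the cosines contribute; the result lies in [-1, 1]."""
    s = sum(math.cos(2 * math.pi * pow(n, 3, p) / p) for n in range(p))
    return s / (2 * math.sqrt(p))

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

primes = [p for p in range(2, 500) if is_prime(p) and p % 3 == 1]
values = [normalized_cubic_gauss_sum(p) for p in primes]

low = sum(v < -0.5 for v in values)         # between -1 and -1/2
mid = sum(-0.5 <= v < 0.5 for v in values)  # between -1/2 and 1/2
high = sum(v >= 0.5 for v in values)        # between 1/2 and 1
print(len(primes), low, mid, high)  # 45 sums, skewed toward the top third
```

Running it reproduces the lopsided tally that caught Kummer's eye: far more values land in the top third of the interval than in the bottom third.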
Von Neumann and Goldstine wanted to test this observation on their computer. Selberg programmed it to calculate the cubic Gauss sums for all the nontrivial primes less than 10,000, meaning those that leave a remainder of 1 when divided by 3 (for the remaining primes, the sum is trivially zero), around 600 sums in all. For her efforts, she would receive a line of acknowledgment at the end of the resulting paper. The results looked nothing like Kummer's: As the primes got bigger, the normalized sums became less and less inclined to cluster near 1. Faced with this strong evidence that Kummer's bias was an illusion, mathematicians began trying to understand the sums in a deeper way.
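The fading of the bias can be watched on a modern machine, in the spirit of the IAS experiment. This sketch (helper names are mine) compares the fraction of normalized sums landing in the top third of the interval for Kummer's range of primes and for primes near 10,000; if the values were eventually distributed evenly, each third would capture about a third of them:

```python
import math

def normalized_cubic_gauss_sum(p):
    """Cubic Gauss sum of a prime p ≡ 1 (mod 3), divided by 2*sqrt(p)."""
    s = sum(math.cos(2 * math.pi * pow(n, 3, p) / p) for n in range(p))
    return s / (2 * math.sqrt(p))

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, math.isqrt(n) + 1))

def top_third_fraction(lo, hi):
    """Fraction of normalized sums >= 1/2 among primes p ≡ 1 (mod 3) in [lo, hi)."""
    vals = [normalized_cubic_gauss_sum(p)
            for p in range(lo, hi) if is_prime(p) and p % 3 == 1]
    return sum(v >= 0.5 for v in vals) / len(vals)

# Kummer's primes (below 500) versus the top of the IAS range:
# the clustering near 1 weakens as the primes grow.
print(top_third_fraction(2, 500), top_third_fraction(9000, 10000))
```

The second number sits noticeably closer to 1/3 than the first, which is the pattern that led mathematicians to doubt Kummer's bias.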
Now that process has finally reached its end. Samuel Patterson proposed a solution to Kummer's mystery in the late 1970s, but he was never able to prove it. Then, in the fall of 2021, two mathematicians at the California Institute of Technology, Alexander Dunn and Maksym Radziwiłł, proved Patterson's conjecture.
Patterson became enamored with the problem as a graduate student at the University of Cambridge in the 1970s. He started with a simpler question: What happens when you add up a collection of numbers placed at random anywhere between $latex -1$ and 1? Some terms are positive and some negative, so they mostly cancel: add up $latex N$ of them, and the total will typically have size around $latex \sqrt{N}$, landing on either side of zero but rarely straying much farther.
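The $latex \sqrt{N}$ heuristic is easy to check empirically. Below is a quick Monte Carlo sketch (the function name and trial counts are mine; for uniform values in $latex [-1, 1]$ the precise constant is $latex \sqrt{N/3}$, which still grows like $latex \sqrt{N}$):

```python
import random

def rms_random_sum(n_terms, trials=2000, seed=1):
    """Root-mean-square size of a sum of n_terms uniform random values in [-1, 1]."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        s = sum(rng.uniform(-1, 1) for _ in range(n_terms))
        total += s * s
    return (total / trials) ** 0.5

# Quadrupling the number of terms roughly doubles the typical sum,
# the signature of square-root growth.
for n in (100, 400, 1600):
    print(n, rms_random_sum(n))
```

Each quadrupling of $latex N$ roughly doubles the output, exactly the square-root scaling described above.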
Patterson first set aside the requirement to stick to prime numbers and studied the cubic Gauss sums for all integers up to $latex N$. There the total was larger than $latex \sqrt{N}$ but smaller than $latex N$, which suggested that the sums behaved like random numbers carrying a slight bias toward being positive. Moreover, when he looked at all of the normalized Gauss sums together, they appeared evenly distributed between $latex -1$ and 1.
Here was a way to reconcile Kummer's computations, which showed a bias, with the IAS computations, which seemed to refute one: The bias is real, but it is small and fades as the numbers grow, so it stands out in a short list of sums like Kummer's and washes out in a longer one.
Patterson conjectured that the primes behave the same way: Add up the cubic Gauss sums for the primes up to $latex N$, and you should see the same $latex N^{5/6}$ growth, random cancellation plus a slight positive bias.
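In symbols, and only schematically (the exact constant and the precise logarithmic factor are finer details than this account specifies; the shape below just reflects a per-prime bias on the order of $latex p^{-1/6}$), Patterson's prediction reads roughly

$latex \displaystyle \sum_{\substack{p \le N \\ p \equiv 1 \,(\mathrm{mod}\, 3)}} \frac{g(p)}{2\sqrt{p}} \;\approx\; c \, \frac{N^{5/6}}{\log N},$

where $latex g(p)$ denotes the cubic Gauss sum of the prime $latex p$ and $latex c$ is a positive constant. Summing a bias of size $latex p^{-1/6}$ over the primes up to $latex N$ (there are about $latex N/\log N$ of them) gives a total of exactly this order.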
After Patterson gave a talk about his work on the Kummer problem in the mid-1970s, he was approached by Roger Heath-Brown, then a graduate student, who wanted to incorporate techniques from the theory of prime numbers. The two teamed up and published an advance on the problem in 1979, but they still could not show that the bias Patterson predicted was accurate.
In the years that followed there was little progress. Then, at the turn of the millennium, Heath-Brown made another breakthrough, this time armed with a new tool called the cubic large sieve.
Using the sieve, Heath-Brown carried out a series of calculations relating the sum of the cubic Gauss sums to a different, more tractable sum, and showed that if you add up the Gauss sums for the primes less than $latex N$, the result cannot be much larger than $latex N^{5/6}$. He believed that the sieve could be improved, and that an improved sieve would lower the bound to exactly the $latex N^{5/6}$ growth Patterson had predicted. In a brief remark, he sketched out the formula he expected the best possible version of the sieve to satisfy.
Even with the new tool, mathematicians could not advance further. Then a lucky meeting between Dunn and Radziwiłł marked the beginning of the end. Dunn was supposed to begin working with Radziwiłł on Patterson's conjecture when he arrived at Caltech in 2020, but the Covid-19 pandemic intervened, and research and teaching carried on remotely. Eventually, the two mathematicians bumped into each other in a Pasadena parking lot. After chatting, they agreed that they should start talking math. By the end of that month, they were working together toward a proof of Patterson's conjecture.
"It was exciting to work on, but very risky," Dunn said. "For a long stretch, I came to my office at 5 a.m. every morning."
Heath-Brown's cubic large sieve was essential to their proof. But when they used the formula he had written down in his paper, the improved sieve he was convinced must exist, something wasn't right. At first, Radziwiłł assumed the mistake was their own. "I was pretty sure that we had an error in our proof," he said. But Dunn persuaded him otherwise: After a lot of hard work, the two proved that, contrary to everyone's expectation, the sieve could not be improved. Heath-Brown's conjectured formula was wrong.
Knowing the true limits of the large sieve allowed the two to recalibrate their approach to Patterson's conjecture. This time, they succeeded.
"I think the main reason why nobody did this before was because the Heath-Brown conjecture was misleading everybody," Radziwiłł said. "I think if I told Heath-Brown that his conjecture is not true, then he probably would have figured out how to do it."
Dunn and Radziwiłł posted their paper on September 15, 2021. Their proof relies on the generalized Riemann hypothesis, a famous conjecture that remains unproven, but other mathematicians do not see this as a major drawback, though they hope the assumption can eventually be dropped. "We want to get rid of the hypothesis," said Heath-Brown, a professor at the University of Oxford.
For Heath-Brown, the work of the two men is more than just a proof of Patterson's conjecture: The paper brought a surprise ending to a story he has been part of for decades. He said he was glad he never wrote in his paper that he was certain one could get rid of the sieve's limitation, only that it would be great if one could, and that you should be able to. "I was wrong again," he said.