The researchers then looked at how to apply this advance to the maximum flow problem. The idea is to turn up the resistance on the highways that have little capacity, discouraging electrons from running through them. The resulting flow of electricity through the wires can be computed quickly, and it has the rough characteristics of the maximum flow of vehicles through the highways. The two flows won't be identical, though, since the electrical problem puts no hard limits on the capacities of the wires.
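To make the analogy concrete, here is a minimal sketch of computing an electrical flow on a toy resistor network (the network, resistances and demand values are invented for illustration): the node potentials solve a Laplacian linear system, and each wire's flow then follows Ohm's law.

```python
# Sketch: electrical flow on a small resistor network.
# Potentials phi solve L @ phi = demand, where L is the graph Laplacian;
# each edge then carries flow f_e = (phi_u - phi_v) / r_e (Ohm's law).
import numpy as np

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]   # wires
resistances = [1.0, 2.0, 1.0, 2.0, 1.0]            # higher r on "low-capacity" links

n = 4
L = np.zeros((n, n))
for (u, v), r in zip(edges, resistances):
    c = 1.0 / r                      # conductance
    L[u, u] += c; L[v, v] += c
    L[u, v] -= c; L[v, u] -= c

demand = np.array([1.0, 0.0, 0.0, -1.0])   # push one unit from node 0 to node 3
phi = np.linalg.pinv(L) @ demand           # L is singular, so use the pseudoinverse
flow = [(phi[u] - phi[v]) / r for (u, v), r in zip(edges, resistances)]
```

The flow automatically satisfies conservation at every node: exactly one unit leaves node 0 and arrives at node 3, split across routes according to the resistances. (Spielman and Teng's contribution was solving such Laplacian systems in near-linear time; `pinv` here is just the simplest stand-in.)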

After producing this initial flow, you adjust the capacities just as in Ford and Fulkerson's approach. Then you reset the resistances of the wires and solve the new electrical problem.

Unlike the combinatorial approaches, which change one chunk of the network at a time, this electrical approach takes in the entire sweep of the network at once. Each step is more powerful, so you need fewer total steps to reach the maximum. In 2008, Samuel Daitch and Spielman used this approach to essentially match the previous best runtime of a multiple of m^1.5. Mądry later pushed the approach further to break through the m^1.5 barrier for low-capacity networks — a result that came as a big surprise.

After extending this approach further, researchers got stuck at m^1.33. They wondered whether that was a fundamental limit.

Merely writing down a network takes time proportional to m, so no algorithm can run faster than a multiple of m — what's called linear time. To many researchers, an algorithm that fast seemed unthinkable.

There was no reason to think we could do that.

Reduce, Reuse, Recycle

The authors of the new paper felt there was more juice left in the electrical approach. Mądry had reduced the number of steps needed to solve a maximum flow problem, but at each step the network had to be rewritten and its electrical flow solved from scratch.

The electrical-flow solver had been treated as a black box: it didn't matter what the algorithm was doing inside. The six researchers instead decided to dig into the guts of the solver and tailor its components to the maximum flow problem. They suspected those components might even let them solve a harder problem — minimum-cost flow, which asks for the cheapest way to route a given amount of material. Any minimum-cost flow algorithm solves the maximum flow problem too.

The researchers came up with an energy function that evaluates a flow in terms of both the costs and the capacities of the links. Shifting flow from an expensive link to a cheaper one lowers the total energy of the system.
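A minimal sketch of what such an energy function can look like, assuming a linear cost plus a logarithmic barrier that blows up as a link approaches capacity (an assumption in the spirit of the idea, not the paper's exact function):

```python
# Sketch: an energy function combining link costs and capacities.
# The log-barrier terms keep every flow strictly between 0 and capacity.
import math

def energy(flows, costs, caps):
    total = 0.0
    for f, c, u in zip(flows, costs, caps):
        total += c * f - math.log(u - f) - math.log(f)  # barrier: 0 < f < u
    return total

costs = [5.0, 1.0]      # an expensive link and a cheap link
caps  = [10.0, 10.0]
before = energy([6.0, 2.0], costs, caps)   # most flow on the expensive link
after  = energy([2.0, 6.0], costs, caps)   # shifted toward the cheaper link
# after < before: moving flow to the cheap link lowers the energy
```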

The researchers merged this energy approach with the older electrical one to figure out how to improve a flow quickly. And if you change only part of the network at a time, you can reuse some of your work from previous steps.

One way to reduce the energy is to add a cycle — a path that begins and ends at the same point. Imagine you have created a flow that carries trucks from Chicago to New York along a toll road, and then you discover a cheaper route. Adding the cycle that starts in New York, runs to Chicago along the expensive road and comes back along the cheaper one cancels out the expensive path and replaces it with the cheap one.
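The toll-road example can be sketched in a few lines (road names and costs are invented for illustration):

```python
# Toy cycle-canceling step: one truckload from Chicago to New York.
# Adding the cycle (NY -> Chicago on the toll road, back on the cheap road)
# cancels the expensive path without changing how much reaches New York.
cost = {"toll_road": 9.0, "back_road": 4.0}
flow = {"toll_road": 1.0, "back_road": 0.0}   # initially all on the toll road

def total_cost(flow):
    return sum(cost[e] * flow[e] for e in flow)

# The cycle sends -1 unit along the toll road and +1 along the back road.
cycle = {"toll_road": -1.0, "back_road": +1.0}
for e, delta in cycle.items():
    flow[e] += delta

# Net delivery is unchanged, but the total cost drops from 9.0 to 4.0.
```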

This approach uses many more steps than the electrical approach, so at first glance it looks like a regression. What compensates is being able to find a good cycle very quickly.

This is where the inner workings of Spielman and Teng's solver come into play. Many of a network's most important features can be captured by a low-stretch spanning tree — a kind of backbone of the network. Given such a tree, at least one good cycle can always be built by adding a single link from outside the tree, so having a low-stretch spanning tree drastically reduces the number of cycles you need to consider.
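A rough sketch of why a spanning tree narrows the search (the toy graph is invented, and a real low-stretch tree is chosen far more carefully than this BFS tree): every link outside the tree closes exactly one cycle with the tree path between its endpoints, leaving only m − n + 1 candidate cycles instead of exponentially many.

```python
# Sketch: fundamental cycles of a spanning tree in a toy graph.
from collections import deque

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
n = 4
adj = {v: [] for v in range(n)}
for u, v in edges:
    adj[u].append(v); adj[v].append(u)

# Build a BFS spanning tree rooted at node 0.
parent = {0: None}
q = deque([0])
while q:
    u = q.popleft()
    for v in adj[u]:
        if v not in parent:
            parent[v] = u
            q.append(v)

tree = {tuple(sorted((v, p))) for v, p in parent.items() if p is not None}
non_tree = [e for e in edges if tuple(sorted(e)) not in tree]

def tree_path(u, v):
    # Walk u up to the root, then walk v up until the paths meet.
    up_u = []
    x = u
    while x is not None:
        up_u.append(x); x = parent[x]
    seen = set(up_u)
    path_v = []
    x = v
    while x not in seen:
        path_v.append(x); x = parent[x]
    return up_u[:up_u.index(x) + 1] + path_v[::-1]

# Each non-tree edge plus its tree path is one candidate cycle.
cycles = [tree_path(u, v) for u, v in non_tree]
```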

But the team couldn't afford the time to build a new tree at every step. Instead, they had to ensure that each new cycle caused only minor ripples in the spanning trees, so that most of the previous computations could be reused. Achieving this level of control was the main challenge, one of the paper's authors said.

As you look at larger and larger networks, the researchers' algorithm's runtime gets ever closer to linear. Mądry called it a tour de force.

The algorithm uses many of the ideas from Spielman and Teng, but in a transformed shape. If you've only ever seen horse-drawn carriages, you're in for a surprise: it has a place to sit, it has wheels, it has something that makes it move — but the horse has been replaced with a motor.

The team's analysis is long and complex, but other researchers are expected to simplify it soon; the belief is that it will only be made cleaner and better over time.

Once the algorithm is streamlined, computer scientists will likely find ways to use it on a range of different problems. Because flows can now be computed so quickly, people may find applications no one has thought of yet.

Spielman, meanwhile, is wondering about the other central problems in the theory of algorithms. What else could we do?