A Major Advance in Computing Solves a Complex Math Problem 1 Million Times Faster

Reservoir computing is an already powerful and advanced type of artificial intelligence. Now, a new study shows how to make it a million times faster for certain tasks.
This is a great development for tackling some of the most difficult computational problems, such as predicting how the weather will evolve or modeling the flow of fluids through a space.

These are the problems this resource-intensive kind of computing was created to solve, and the latest innovations will make it even more practical. The team that led the study calls the approach next-generation reservoir computing.

Daniel Gauthier, a physicist at The Ohio State University, says that "we can perform very complicated information processing tasks in fractions of the time using much fewer computer resources than what reservoir computing can currently accomplish."

"Reservoir computing was already a significant step forward from what was possible previously."

Reservoir computing is a form of neural network: a machine learning system loosely modeled on how living brains work. Neural networks can spot patterns in large amounts of data; one can learn to recognize a dog, for example, after training on a thousand pictures of dogs.

Reservoir computing provides extra power, though the details are quite technical. The process involves sending information into a "reservoir," where data points are linked in many different ways. The information is then sent out of the reservoir, analyzed, and fed back into the learning process.
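To make that description concrete, here is a minimal sketch of a classical reservoir computer (an echo state network) in Python. All sizes, scalings, and the toy prediction task are illustrative assumptions, not the study's actual setup; the key point is that the reservoir itself is random and fixed, and only the final linear readout is trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, not the study's configuration.
n_inputs, n_reservoir = 1, 100

# The "reservoir": a fixed, randomly connected network.
W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))  # rescale for stable dynamics

def run_reservoir(inputs):
    """Feed a sequence through the reservoir, collecting its internal states."""
    state = np.zeros(n_reservoir)
    states = []
    for u in inputs:
        state = np.tanh(W_in @ u + W @ state)
        states.append(state.copy())
    return np.array(states)

# Toy task: predict the next value of a sine wave.
inputs = np.sin(np.linspace(0, 8 * np.pi, 200)).reshape(-1, 1)
targets = inputs[1:]
X = run_reservoir(inputs[:-1])

# Only this linear readout is trained (ridge regression).
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_reservoir), X.T @ targets)

pred = X @ W_out
print(np.mean((pred - targets) ** 2))  # small training error
```

Because the input and internal weights are random and never trained, what the reservoir actually computes internally is opaque; that is the "black box" aspect discussed below.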

This makes it possible to learn sequences faster and more easily. But it relies on random processing, which means what happens inside the reservoir is not transparent. Engineers call such a system a "black box": it works, but no one knows exactly why.

The new research shows that reservoir computers can be made to work more efficiently by eliminating the randomization. Mathematical analysis revealed which parts of a reservoir computer are crucial for it to work; removing the redundant parts speeds up processing.
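The randomization-free approach can be sketched as a nonlinear vector autoregression: instead of a random reservoir, the features are built deterministically from a few time-delayed inputs and their nonlinear combinations, and a linear readout is trained on top. The delay count, quadratic features, and toy task below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def make_features(series, delays=2):
    """Build deterministic features: current value, past values,
    and their quadratic combinations (no random reservoir)."""
    rows = [series[i - delays + 1 : i + 1][::-1]
            for i in range(delays - 1, len(series))]
    lin = np.array(rows)
    # Nonlinear part: unique pairwise products of the linear features.
    quad = np.array([np.outer(r, r)[np.triu_indices(delays)] for r in lin])
    # Constant term + linear terms + quadratic terms.
    return np.hstack([np.ones((len(lin), 1)), lin, quad])

# Toy task: predict the next value of a sine wave.
series = np.sin(np.linspace(0, 8 * np.pi, 200))
X = make_features(series[:-1])  # features from times t and t-1
y = series[2:]                  # target: value at time t+1
W_out = np.linalg.lstsq(X, y, rcond=None)[0]

print(np.mean((X @ W_out - y) ** 2))  # small training error
```

Note that predictions can begin as soon as `delays` past points are available, which illustrates why the warm-up period discussed below shrinks to just a handful of data points.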

One result is that a much shorter "warm-up" period is required: the phase in which the neural network receives preliminary data to prepare it for its task. This is where the research team made major improvements.

Gauthier says that virtually no warm-up time is required for next-generation reservoir computing.

With current systems, scientists need to input 1,000 to 10,000 data points just to warm them up. All of that data is overhead: it is not needed for the actual work. With the new system, only one, two, or three data points are required.

A particularly challenging forecasting task, run on a standard desktop computer with the new system, took less than a second. With current reservoir computing technology, the same task requires a supercomputer and takes far longer.

The new system was between 33 and 163 times faster, depending on the data. When the task objective was changed to prioritize accuracy, the updated model was one million times faster.

This is only the beginning for the super-efficient neural network. The researchers behind it hope to test it against other, more difficult tasks in the future.

Gauthier says, "What's interesting is that this next generation of reservoir computing takes what was already very good and makes it significantly more efficient."

The research was published in Nature Communications.