Scientists develop the next generation of reservoir computing

Scientists can now tackle some of the most complex information-processing problems thanks to a new type of computing that works in a way loosely similar to the brain.

Researchers have now found a way to make reservoir computing work between 33 and a million times faster, with significantly fewer computing resources.

Using next-generation reservoir computing, researchers solved a complex computing problem in less than one second on a desktop computer.

The same problem can be solved with today's state-of-the-art technology, but it takes much longer, according to Daniel Gauthier, lead author of the study and professor of physics at The Ohio State University.

"We can perform very complex information-processing tasks in a fraction of the time, using much less computing power, compared to what reservoir computing can currently do," Gauthier said.

"Reserve computing was already a significant step forward from what was possible previously."

The study was published today in the journal Nature Communications.

Reservoir computing is a machine-learning algorithm developed in the early 2000s that is used to solve the hardest of computing problems, such as forecasting how dynamical systems evolve over time, Gauthier said.

Dynamical systems, such as the weather, are difficult to predict because a small change in one condition can have huge effects down the line, he said.

One familiar example is the "butterfly effect," the metaphorical idea that changes created by a butterfly flapping its wings can eventually influence the weather weeks later.

Previous research has shown that reservoir computing is well suited to learning dynamical systems and can provide accurate forecasts of how they will behave in the future, Gauthier said.

It does this through an artificial neural network, somewhat like a human brain. Scientists feed data about a dynamical system into a "reservoir," a collection of randomly connected artificial neurons in the network. They can then use the network's output to build an increasingly accurate forecast of the system's future.
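In software, a conventional reservoir computer of this kind (often called an echo state network) can be written in a few dozen lines. The following sketch, in Python with NumPy, is illustrative only: the network size, weight scales, and ridge parameter are assumptions for demonstration, not values from the study. The key property it shows is that the random reservoir weights are never trained; only the final linear readout is.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Illustrative sizes, not taken from the study.
n_inputs, n_reservoir = 3, 300

# Fixed random input and reservoir weights -- these are never trained.
W_in = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_inputs))
W = rng.uniform(-0.5, 0.5, size=(n_reservoir, n_reservoir))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # keep dynamics stable

def run_reservoir(inputs):
    """Drive the reservoir with a (T, n_inputs) series; return all states."""
    state = np.zeros(n_reservoir)
    states = np.empty((len(inputs), n_reservoir))
    for t, u in enumerate(inputs):
        # Classic echo-state update: new state mixes input and recurrence.
        state = np.tanh(W_in @ u + W @ state)
        states[t] = state
    return states

def train_readout(states, targets, ridge=1e-6):
    """Only the linear readout is trained, via ridge regression."""
    return np.linalg.solve(
        states.T @ states + ridge * np.eye(states.shape[1]),
        states.T @ targets,
    )
```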

The more complex the system and the more accurate the desired forecast, the larger the network of artificial neurons must be, and the more computing resources and time the task requires.

One issue, Gauthier said, has been that the reservoir of artificial neurons is a "black box": scientists don't know exactly what goes on inside it, only that it works.

The artificial neural networks at the heart of reservoir computing are built on mathematics, Gauthier explained.

"We asked mathematicians to look at these networks and ask them, 'To what extent do all these parts in the machinery really need?' He said.

In their study, Gauthier and his colleagues investigated that question and found that the entire reservoir computing system could be greatly simplified, dramatically reducing the computing resources needed and saving significant time.

They tested their concept on a forecasting task involving a weather system developed by Edward Lorenz, whose work helped establish our understanding of the butterfly effect.
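Lorenz's 1963 system consists of three coupled differential equations that, with the standard parameters (sigma = 10, rho = 28, beta = 8/3), trace out the famous chaotic "butterfly" attractor. A short Python sketch for generating forecasting data from it follows; the step size, integrator, and trajectory length are arbitrary illustrative choices.

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz '63 equations (a simple
    integrator chosen for clarity; a higher-order one is more accurate)."""
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

# Generate a trajectory to use as training/testing data for a forecaster.
state = np.array([1.0, 1.0, 1.0])
trajectory = np.empty((10_000, 3))
for t in range(len(trajectory)):
    state = lorenz_step(state)
    trajectory[t] = state
```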

Their next-generation reservoir computing clearly beat today's state of the art on this Lorenz forecasting task. In a relatively simple simulation run on a desktop computer, the new system was 33 to 163 times faster than the current model.

When the goal was forecasting with great accuracy, the next-generation reservoir computing was about 1 million times faster. And the new system achieved that accuracy with the equivalent of just 28 neurons, far fewer than the current-generation model required, Gauthier said.
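The published paper describes the simplified system as mathematically equivalent to a nonlinear vector autoregression: instead of driving a large random reservoir, it builds its feature vector directly from a few time-delayed copies of the input and their polynomial products, and again trains only a linear readout. The sketch below is a heavily simplified rendering of that idea; the delay count, polynomial order, and ridge parameter are illustrative assumptions, and it reuses the `trajectory` array from the Lorenz sketch above.

```python
import numpy as np
from itertools import combinations_with_replacement

def ngrc_features(series, k=2):
    """Build next-generation-reservoir-style features from a (T, d) series:
    k time-delayed copies of the state plus their quadratic products."""
    T, d = series.shape
    rows = []
    for t in range(k - 1, T):
        lin = series[t - k + 1 : t + 1].ravel()          # linear delay terms
        quad = [a * b for a, b in combinations_with_replacement(lin, 2)]
        rows.append(np.concatenate(([1.0], lin, quad)))  # constant + features
    return np.array(rows)

def train_readout(features, targets, ridge=1e-6):
    """Ridge-regression readout, the only trained part of the model."""
    n = features.shape[1]
    return np.linalg.solve(
        features.T @ features + ridge * np.eye(n), features.T @ targets
    )

# Example: one-step-ahead forecasting on the Lorenz trajectory from above.
k = 2
X = ngrc_features(trajectory[:-1], k=k)
Y = trajectory[k:]            # targets aligned with the feature rows
W_out = train_readout(X, Y)
prediction = X @ W_out        # one-step forecasts
```

Because the features are explicit delay terms and their products, there is no black-box reservoir to inspect; every component of the model can be read off directly.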

An important reason the next generation of reservoir computing is so much faster is that it needs far less warmup and training than the current generation.

Warmup is training data that must be fed into a reservoir computer to prepare it for its actual task.

Gauthier stated that "for our next-generation reservoir computing there is almost no heating time required."

"Currently, scientists have to input 1,000 to 10,000 data points to warm it up, and all of that data is lost; it is not needed for the actual work. With the new system, we only need to input one, two, or three data points," he said.
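In code, warming up a conventional reservoir amounts to running the network on data that is then thrown away, so that its internal state forgets the arbitrary initial condition. A sketch follows, reusing `run_reservoir` and `trajectory` from the earlier examples; the discard length of 1,000 points is an illustrative assumption.

```python
# Drive the reservoir on the full series, then discard the first 1,000
# states: they reflect the arbitrary zero initial condition, not the data.
n_warmup = 1_000
states = run_reservoir(trajectory)
usable_states = states[n_warmup:-1]         # warmup data is effectively lost
usable_targets = trajectory[n_warmup + 1:]  # aligned next-step targets

# The next-generation approach needs only k delayed points (e.g. 2 or 3)
# before its first feature vector is available, so almost nothing is lost.
```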

And once researchers are ready to train the reservoir computer to make forecasts, the next-generation system needs much less data there, too.

In their test on the Lorenz forecasting task, the researchers got the same results using 400 data points as the current generation produced using 5,000 data points or more, depending on the accuracy desired.

Gauthier stated, "What's exciting about this next generation reservoir computing is that it takes what was already very well and makes it significantly more effective."

He and his colleagues plan to extend this work to tackle even more difficult computing problems, such as forecasting fluid dynamics.

"This is a very difficult problem to solve. We are trying to find a way to speed up the process by using our simplified reservoir computing model.

Co-authors of the study were Erik Bollt, professor of electrical and computer engineering at Clarkson University; Aaron Griffith, who received his Ph.D. in physics at Ohio State; and Wendson Barbosa, a postdoctoral researcher in physics at Ohio State.
