An artificial intelligence developed by DeepMind has discovered a faster way to multiply matrices, the first improvement of its kind in more than 50 years. Because a wide range of software relies on carrying out the task at enormous scale, the discovery could boost some computation speeds by up to 20 per cent.

Matrix multiplication is a fundamental computing task used in graphics, artificial intelligence and scientific simulations, so even a small improvement in the efficiency of these algorithms can bring large performance gains or significant energy savings.

For centuries it was believed that the most efficient way to multiply matrices was the schoolbook method, whose cost grows with the number of elements involved: multiplying two n-by-n matrices this way requires n-cubed scalar multiplications.
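The schoolbook method can be sketched as follows (a minimal Python illustration for context, not DeepMind's code):

```python
def naive_matmul(A, B):
    """Schoolbook matrix multiplication of two n-by-n matrices.

    Each of the n*n output entries needs n scalar multiplications,
    for n**3 multiplications in total.
    """
    n = len(A)
    return [
        [sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
        for i in range(n)
    ]
```

For two 4-by-4 matrices this performs 4³ = 64 scalar multiplications.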

In 1969, the mathematician Volker Strassen found a clever trick that multiplies a pair of 2-by-2 matrices using only seven multiplications instead of the usual eight. Because additions take far less time than multiplications on a computer, the extra additions the trick requires are an acceptable trade-off.
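Strassen's trick can be written out explicitly (a self-contained sketch of the published 1969 identities):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications (Strassen, 1969).

    The schoolbook method needs 8 multiplications; here seven products
    p1..p7 are combined with extra additions and subtractions instead.
    """
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return [
        [p5 + p4 - p2 + p6, p1 + p2],
        [p3 + p4, p1 + p5 - p3 - p7],
    ]
```

Applied recursively to matrices split into 2-by-2 blocks, this is what makes large multiplications asymptotically cheaper than the schoolbook method.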

The most efficient approach for most matrix sizes has remained unchanged for more than 50 years, although some slight theoretical improvements have been found that are hard to adapt into computer code. DeepMind's artificial intelligence has now found faster techniques that work well on current hardware. AlphaTensor, the company's new AI, started with no knowledge of any existing solutions and was set the problem of creating a working algorithm that completes the task in the minimum number of steps.

It found a way to multiply two 4-by-4 matrices using just 47 multiplications, and it improved algorithms for more than 70 other matrix sizes.
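To see why 47 is notable: applying Strassen's 2-by-2 trick recursively to a 4-by-4 multiplication (split into 2-by-2 blocks) needs 7 × 7 = 49 multiplications, the best previously known, while the schoolbook method needs 64. A small counting sketch, written for illustration:

```python
def strassen_mult_count(n):
    """Scalar multiplications used by recursive Strassen on n-by-n
    matrices (n a power of 2): each halving turns 8 block products
    into 7, so the total is 7 ** log2(n)."""
    count = 1
    while n > 1:
        count *= 7
        n //= 2
    return count

# 4x4: schoolbook 64, recursive Strassen 49, AlphaTensor 47.
```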

For 4-by-4 matrices alone, AlphaTensor found over a thousand functional algorithms, only a small fraction of which were better than the state of the art. DeepMind has been working on the research for two years.

The results are not intuitive for humans. The researchers don't know why the system came up with these methods, or why they are the best ways to multiply matrices.

Somehow, the neural networks get a sense of what looks good and what looks bad, but the researchers can't say exactly how that works, and there is theoretical work still to be done on how deep learning achieves these kinds of results.

Although DeepMind found that its algorithms could boost computation speed by between 10 and 20 per cent on certain chips, there is no guarantee those gains would be seen on common devices.

A great deal of software run on both ordinary computers and powerful specialised hardware carries out large-scale matrix multiplication. If this approach were implemented there, it could act as a sort of universal speed-up; implemented in Nvidia's CUDA library, for instance, it would knock some percentage off most deep-learning workloads.

Because matrix multiplication is such a common task, the new algorithms could boost the efficiency of a wide range of software, according to Oded Lachish.

We are likely to see AI-generated results for other problems similar to matrix multiplication, he says, and there is significant motivation for such technology, since fewer operations in an algorithm means less energy spent. If a task can be completed more efficiently, it can be run on less powerful, less power-hungry hardware, or on the same hardware in less time.

But DeepMind's advances don't put humans out of a job. Should programmers be concerned? Perhaps in the future. For now, this kind of automation is simply an important tool in the coder's arsenal, much as it has been for decades in the chip-design industry.

Journal reference: Nature
