Although our mushy brains may seem far removed from the solid silicon chips in computer processors, scientists have long compared the two. As Alan Turing put it in 1952, the brain's consistency is irrelevant; only its computational abilities matter.
Deep learning, the kind of machine learning that powers today's most advanced artificial intelligence systems, relies on deep neural networks: algorithms that process massive amounts of data through hidden layers of interconnected nodes. These networks take their name from the brain's own neural networks, and their nodes were modeled on neurons, or at least on what neuroscientists knew about neurons in the 1950s, when the influential perceptron model of the neuron was created. Since then, our understanding of the computational complexity of single neurons has grown dramatically. Biological neurons are known to be more complex than artificial ones. But how much more?
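The "hidden layers of interconnected nodes" can be sketched in a few lines of Python. The layer sizes, weights, and tanh nonlinearity below are illustrative choices made up for this sketch, not the networks used in any study:

```python
import math

def layer(inputs, weights, biases):
    """One fully connected layer: each node computes a weighted sum of all
    inputs plus a bias, passed through a tanh nonlinearity."""
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A tiny network with one hidden layer; all numbers are arbitrary examples.
hidden_w = [[0.5, -0.6], [0.9, 0.2]]   # two hidden nodes, two inputs each
hidden_b = [0.1, -0.3]
out_w = [[1.0, -1.0]]                  # one output node
out_b = [0.0]

x = [0.7, 0.4]                         # an input vector
h = layer(x, hidden_w, hidden_b)       # hidden layer of interconnected nodes
y = layer(h, out_w, out_b)             # output layer
```

Stacking more calls to `layer` is all it takes to make the network "deeper," which is the knob the researchers turn later in this article.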
David Beniaguev, Idan Segev and Michael London, all at the Hebrew University of Jerusalem, trained an artificial deep neural network to mimic the computations of a simulated biological neuron. They found that a deep neural network needs between five and eight layers of interconnected "neurons" to represent the complexity of a single biological neuron.
Even the authors did not anticipate this level of complexity. Beniaguev said he expected the result to be simpler and more compact, guessing that three or four layers would suffice to capture all the calculations performed within the cell.
Timothy Lillicrap, who designs decision-making algorithms at the AI company DeepMind, said the new results suggest it may be necessary to rethink the old tradition of loosely equating a neuron in the brain with a neuron in a machine-learning network. The paper, he said, forces you to think about the comparison more carefully and to consider how far the analogy can be drawn.
The most fundamental analogy between real and artificial neurons involves how they handle incoming information. Both kinds of neurons receive incoming signals and, based on that information, decide whether to send a signal of their own to other neurons. While an artificial neuron makes this decision with a simple calculation, decades of research have shown that the process is far more complicated in biological neurons. Computational neuroscientists use an input-output function to model the relationship between the inputs arriving at a biological neuron's dendrites, its long treelike branches, and the neuron's decision to fire.
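The "simple calculation" an artificial neuron performs can be written out directly: a perceptron-style weighted sum followed by a threshold. The weights and bias below are made-up illustrative values:

```python
def artificial_neuron(inputs, weights, bias):
    """Perceptron-style decision: fire (1) if the weighted sum of the
    incoming signals crosses the threshold, stay silent (0) otherwise."""
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0

# Two incoming signals: 0.8*1.0 + (-0.2)*0.5 - 0.3 = 0.4 > 0, so it fires.
print(artificial_neuron([1.0, 0.5], [0.8, -0.2], -0.3))  # prints 1
```

A biological neuron's input-output function has no such one-line form, which is exactly why the researchers needed a whole deep network to approximate it.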
To establish how complex that function is, the new research taught an artificial deep neural network to mimic it. The authors began with a massive simulation of the input-output function of a particular type of neuron from a rat's cortex, called a pyramidal neuron, which has distinct trees of dendritic branches at its top and bottom. They then fed the simulation's data into a deep neural network with up to 256 artificial neurons in each layer, and increased the number of layers until the network reached 99% accuracy at predicting the simulated neuron's outputs from its inputs. The deep neural network could predict the neuron's input-output behavior with at least five, but no more than eight, artificial layers. In most of the networks, that equated to roughly 1,000 artificial neurons for a single biological neuron.
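As a back-of-the-envelope check on that last figure, a network in the five-to-eight-layer range with a moderate layer width lands on the order of 1,000 units. The width of 128 used here is an assumption chosen purely for illustration (the article only says the networks used up to 256 neurons per layer):

```python
# Rough unit count for a network in the reported depth range.
layers = 7              # within the five-to-eight-layer range from the study
units_per_layer = 128   # ASSUMED width, for illustration only
total_units = layers * units_per_layer
print(total_units)      # prints 896, on the order of 1,000
```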