Neuron Bursts Can Mimic a Famous AI Learning Strategy

Their model needed one more piece: a way to use the teaching signal to solve the credit assignment problem without halting sensory processing. Naud and Richards proposed that neurons have separate compartments at their tops and bottoms that process the neural code in completely different ways.
"Our model shows that two signals can be sent simultaneously, one going up and one going down, and they can pass each other," Naud said.

In their model, the tree-like dendritic branches that receive inputs at the tops of neurons listen only for bursts, the internal teaching signal, and use them to tune their connections and reduce error. As in backpropagation, this tuning works from the top down: neurons higher in the network regulate the probability that the neurons below them will send bursts. The researchers found that when bursts are more frequent, neurons tend to strengthen their connections, and when bursts are less frequent, connections tend to weaken. A burst tells a neuron that it should be active and should strengthen its connections, reducing error; the absence of bursts tells a neuron that it should be inactive and may need to weaken its connections.
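As a rough illustration, the rule described above can be written as a weight update whose sign depends on whether bursts are more or less frequent than some baseline. The function name, baseline value, and learning rate below are illustrative assumptions, not quantities from Naud and Richards' model, which is defined over full spiking dynamics.

```python
def update_weight(w, pre_rate, burst_prob, baseline=0.2, lr=0.1):
    """Sketch of burst-dependent plasticity (assumed rate-based form):
    a burst probability above the baseline strengthens the connection,
    while a burst probability below it weakens the connection."""
    return w + lr * (burst_prob - baseline) * pre_rate

# More bursts than baseline -> connection strengthens (1.0 -> 1.03)
stronger = update_weight(1.0, pre_rate=1.0, burst_prob=0.5)
# No bursts at all -> connection weakens (1.0 -> 0.98)
weaker = update_weight(1.0, pre_rate=1.0, burst_prob=0.0)
```

Scaling the update by the presynaptic rate (`pre_rate`) keeps the change local: only inputs that were actually active get strengthened or weakened.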

The bottom branches of neurons treat bursts as if they were single spikes, which lets them keep sending sensory information up the circuit without interruption.
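This two-readout idea can be sketched as demultiplexing a single spike train: the bottom compartment counts events (a whole burst collapsing to one), while the top compartment tracks what fraction of events are bursts. The function name and the 16 ms burst window below are illustrative assumptions, not parameters from the paper.

```python
def demultiplex(spike_times, burst_window=0.016):
    """Split one spike train (times in seconds, ascending) into the two
    channels described in the text: an event count in which a burst counts
    as a single event, and the fraction of events that were bursts."""
    events = 0          # what the bottom compartment "sees"
    bursts = 0          # what the top compartment "sees"
    in_burst = False
    prev = None
    for t in spike_times:
        if prev is not None and t - prev < burst_window:
            if not in_burst:
                bursts += 1        # first short interval marks a new burst
                in_burst = True
        else:
            events += 1            # new event: single spike or burst onset
            in_burst = False
        prev = t
    burst_fraction = bursts / events if events else 0.0
    return events, burst_fraction

events, burst_frac = demultiplex([0.0, 0.005, 0.010, 0.100, 0.200, 0.205])
# three events in total, two of which were bursts
```

The same spikes thus carry sensory information (event count) and the teaching signal (burst fraction) at once, which is the multiplexing the quote above describes.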

João Sacramento, a computational neuroscientist at the University of Zurich, said the idea seemed logical in retrospect. "That is brilliant, I think."

Others had tried a similar logic in the past. Some 20 years ago, Konrad Körding and Peter König of Osnabrück University in Germany proposed a learning framework with two-compartment neurons. But their proposal lacked biological validation and did not solve the credit assignment problem.

Kording said that back then, researchers didn't have the technology to test these ideas. He is impressed by the paper and plans to keep pursuing the idea in his own lab.

With today's computing power, Naud and Richards were able to simulate their model, with bursting neurons playing the role of the learning rule. They showed that it solves the credit assignment problem in a classic task known as XOR, which requires learning to respond when one of two inputs is 1, but not when both are. They also showed that a deep neural network built on their bursting rule could approximate the performance of the backpropagation algorithm on challenging image classification tasks. There is still a gap to close, however, since the backpropagation algorithm remained more accurate, and neither approach matches human capabilities.
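For concreteness, here is the XOR benchmark itself, solved with ordinary backpropagation, the algorithm the burst rule is said to approximate; reproducing the spiking burst rule is beyond a short example. The architecture, seed, and learning rate are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # 1 when exactly one input is 1

# Small 2-8-1 network with sigmoid units (sizes are an arbitrary choice)
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    h = sigmoid(X @ W1 + b1)                # hidden activity (sensory sweep)
    out = sigmoid(h @ W2 + b2)              # network output
    d_out = (out - y) * out * (1 - out)     # top-level error signal
    d_h = (d_out @ W2.T) * h * (1 - h)      # credit assigned to hidden units
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

predictions = (out > 0.5).astype(int).ravel()
```

The `d_h` line is the step a biological learning rule must account for: credit flowing backward from the output to hidden units, which in Naud and Richards' model is carried by top-down control of burst probability rather than by an explicit backward pass.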

Naud said there are details the model still lacks and that it needs to be improved. The paper's main purpose is to show that the kind of learning machines do can be approximated by physiological processes.