What is the computational complexity of a single neuron?

By training an artificial neural network to mimic a living neuron, scientists have found a new way to measure the complexity of a single brain cell, and an entirely different way of thinking about it.

Our mushy brains seem a far cry from the solid silicon chips in computer processors, but scientists have a long history of comparing the two. As Alan Turing put it in 1952: “We are not interested in the fact that the brain has the consistency of cold porridge.” In other words, the medium doesn’t matter, only the computational ability.

Deep learning is the type of machine learning that powers today’s most advanced artificial intelligence systems. Deep neural networks are algorithms that process massive amounts of data through hidden layers of interconnected nodes. As their name implies, deep neural networks were inspired by the real networks of neurons in the brain. Their nodes are modeled on actual neurons, or at least on what neuroscientists knew about neurons back in the 1950s, when the influential perceptron model emerged. Since then, our understanding of how single neurons work has greatly improved, and biological neurons have been shown to be much more complicated than artificial ones. How much more, you ask?
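To see how simple an individual artificial node really is, here is a minimal sketch of a perceptron-style unit in Python. The weights, inputs, and threshold are illustrative values, not taken from the study:

```python
import numpy as np

def artificial_neuron(inputs, weights, bias):
    """A perceptron-style node: a weighted sum of inputs followed by a threshold."""
    weighted_sum = np.dot(weights, inputs) + bias
    return 1 if weighted_sum > 0 else 0  # fire (1) or stay silent (0)

# Illustrative example: three inputs with hand-picked weights.
inputs = np.array([0.5, -1.2, 0.8])
weights = np.array([0.9, 0.3, -0.5])
print(artificial_neuron(inputs, weights, bias=0.1))
```

Everything an artificial node does is contained in that one weighted sum and threshold; the question the researchers asked is how many of these units it takes to reproduce what one biological neuron does.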

David Beniaguev, Idan Segev and Michael London, all at the Hebrew University of Jerusalem, trained an artificial deep neural network to replicate the computations of a biological neuron (https://www.sciencedirect.com/science/article/abs/pii/S0896627321005018). They showed that a deep neural network requires between five and eight layers of interconnected “neurons” to represent the complexity of a single biological neuron.
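The general idea is to treat the biological neuron as a black box that maps inputs to outputs, generate many input-output pairs from a detailed simulation of it, and then train an artificial network on those pairs until it reproduces the mapping. The sketch below is only a schematic of that idea, using placeholder random data, a small fully connected PyTorch model, and made-up layer sizes; the study itself used time-resolved synaptic inputs and a different network architecture:

```python
import torch
import torch.nn as nn

# Placeholder data standing in for simulated input-output pairs:
# each row is a vector of synaptic inputs, each target is the
# simulated neuron's response (e.g., a spike probability).
inputs = torch.rand(1024, 128)    # 1024 samples, 128 synapses (made up)
targets = torch.rand(1024, 1)     # simulated neuron's output (made up)

# A small multi-layer network; the finding was that several hidden
# layers are needed before such a fit becomes accurate.
model = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 1), nn.Sigmoid(),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

# Fit the artificial network to the neuron's input-output behavior.
for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()
```

The depth of the smallest network that fits well then serves as a rough yardstick for the computational complexity of the neuron being imitated.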

Even the authors did not anticipate this level of complexity. “I expected it to be simpler and more compact,” Beniaguev said; he had thought three or four layers would be enough to capture the computations within each cell.

Timothy Lillicrap, who designs decision-making algorithms at the Google-owned AI company DeepMind, said the new result suggests that it might be necessary to rethink the old tradition of loosely comparing a neuron in the brain to a neuron in the context of machine learning. He said that the paper “really helps to force you to think about this more thoroughly and examine how far you can draw those analogies.”

The most basic analogy between real and artificial neurons is how they deal with incoming information. Both kinds of neuron receive incoming signals and decide whether or not to transmit a signal of their own to other neurons. Although artificial neurons make this decision with a straightforward calculation, decades of research have shown that the process is far more complex in biological neurons. To model the relationship between the inputs arriving on a neuron’s tree-like branches, the dendrites, and its decision to emit a signal, computational neuroscientists use an input-output function.
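To make the contrast concrete, the toy sketch below compares a simple point-neuron calculation with a crude two-stage input-output function in which each dendritic branch applies its own nonlinearity before the results are combined at the cell body. This is an illustrative caricature only, not the authors' model; real dendritic integration is far richer than a tanh per branch:

```python
import numpy as np

def point_neuron(inputs, weights, threshold=1.0):
    """Simple artificial neuron: one global weighted sum, then a threshold."""
    return float(np.dot(weights, inputs) > threshold)

def two_stage_neuron(branch_inputs, branch_weights, threshold=1.0):
    """Toy dendritic model: each branch sums and saturates its own inputs
    (via tanh) before the cell body combines the branch outputs."""
    branch_outputs = [np.tanh(np.dot(w, x))
                      for w, x in zip(branch_weights, branch_inputs)]
    return float(sum(branch_outputs) > threshold)

rng = np.random.default_rng(0)
inputs = rng.random(6)
weights = rng.random(6)

print(point_neuron(inputs, weights))
# Split the same six inputs across two "branches".
print(two_stage_neuron([inputs[:3], inputs[3:]], [weights[:3], weights[3:]]))
```

Because each branch transforms its inputs before they are combined, the two-stage unit can compute input-output relationships that no single weighted sum can reproduce, which is exactly why a multi-layer artificial network is needed to imitate a real neuron.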
