What are all those neurons doing?
To understand what the brain is doing, we must first understand what its building blocks, the nerve cells (neurons), are doing. This is a non-trivial problem that requires fairly advanced mathematical tools for a full analysis, but I will stay away from deep dives into formulas here. By “doing”, I am referring to the electrical signalling between neurons, which carries information and accounts for most of a neuron’s energy consumption.
A neuron consists of dendrites, which receive signals from other neurons; the axon, which conveys signals on to other neurons; and the cell body (soma), which connects the dendrites to the axon. A somewhat different division serves better for understanding the electrical signalling: the soma and the neighbouring (proximal) portions of the dendrites together constitute one compartment; the more distant (distal) dendrite sections form a second compartment; and the axon initial segment (AIS) forms a third (see fig. Purkinje neuron).
Fig. Purkinje neuron and its division into three compartments (CC BY 4.0)
The electrical signals are sequences of voltage spikes (action potentials). However, it is not the shape of the spikes that conveys information, but the intervals between them (the interspike intervals). What has puzzled researchers is that these intervals vary substantially, even when input is pharmacologically blocked. This effect cannot be explained by the classical Hodgkin-Huxley model, but in a paper I published with the neurophysiologist Henrik Jörntell in Physical Review E in February 2021, we show that our three-compartment model can explain it. The main cause of the variations appears to be thermal noise affecting passages (ion channels) in the wall (membrane) of the proximal compartment.
In the three-compartment model, the input is summed (integrated) by the distal compartment; the proximal compartment creates a voltage ramp and adds noise; and the third compartment (the AIS) probes the voltage level and generates a spike when it passes a certain threshold voltage.
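To make this division of labour concrete, here is a minimal numerical sketch of the ramp-and-noise mechanism. This is not the published model; the parameter values and the Euler discretisation are placeholders of my own choosing, for illustration only.

```python
import numpy as np

rng = np.random.default_rng(0)

def interspike_interval(ramp_rate=1.0, noise_sigma=0.3,
                        threshold=1.0, dt=1e-3, t_max=100.0):
    """Time for a noisy voltage ramp to first cross a threshold.

    The proximal compartment is caricatured as a linear ramp (its slope
    set by the integrated input from the distal compartment) plus
    Gaussian channel noise; the AIS fires at the first threshold
    passage. All numbers are illustrative, not fitted.
    """
    v, t = 0.0, 0.0
    while t < t_max:
        # Euler-Maruyama step: deterministic ramp + thermal noise
        v += ramp_rate * dt + noise_sigma * np.sqrt(dt) * rng.normal()
        t += dt
        if v >= threshold:
            return t
    return np.nan  # no spike within t_max

# The spread of these intervals mimics the observed ISI variability.
isis = [interspike_interval() for _ in range(1000)]
print(f"mean ISI = {np.mean(isis):.3f}, CV = {np.std(isis) / np.mean(isis):.3f}")
```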
Fig. Model agreement with experimental data. The red solid trace marks the model’s theoretical probability density, the blue bars show the histogram of interspike intervals, and the black dashed trace marks the kernel density estimate (like a histogram, but better). (CC BY 4.0)
The agreement between the model and the experimental measurements is spectacular, despite the model being mechanistic and having only the minimal number (three) of free parameters (Fig. Model agreement). “Mechanistic” means that the model is built upon underlying biophysical machinery, and not on “curve fitting”. This means that we can view the model as an explanation of how the signalling actually works.
The breakthrough that enabled us to find the model was the solution of the first-passage problem, which I published in Journal of Physics A: Mathematical and Theoretical in October 2020. That article describes a method for computing the probability that a stochastic (random) process passes a time-variable boundary, and the method applies directly to the neuron, where the stochastic process represents the neuron’s internal voltage (membrane potential). I gave a short presentation on this method at KTH (Royal Institute of Technology) in February 2021; a video with subtitles is available on YouTube.
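The paper solves the first-passage problem analytically; for intuition, the same quantity can be approximated by brute-force Monte Carlo. The sketch below is my own illustration, using an arbitrary drifting Wiener process and an arbitrary moving boundary b(t): it simulates many sample paths and records when each one first crosses the boundary.

```python
import numpy as np

rng = np.random.default_rng(1)

def first_passage_times(b, mu=0.5, sigma=1.0,
                        dt=1e-3, t_max=20.0, n_paths=5000):
    """Monte Carlo first-passage times of a drifting Wiener process
    X(t) through a time-variable boundary b(t).

    Crude Euler-Maruyama scheme; the analytical method in the
    J. Phys. A paper replaces this sampling with an exact computation.
    """
    x = np.zeros(n_paths)
    fpt = np.full(n_paths, np.nan)
    alive = np.ones(n_paths, dtype=bool)
    for k in range(1, int(t_max / dt) + 1):
        t = k * dt
        x[alive] += mu * dt + sigma * np.sqrt(dt) * rng.normal(size=alive.sum())
        crossed = alive & (x >= b(t))
        fpt[crossed] = t
        alive &= ~crossed
        if not alive.any():
            break
    return fpt  # NaN entries never crossed within t_max

# Example: a boundary that relaxes downward over time.
times = first_passage_times(lambda t: 2.0 + np.exp(-t))
print(f"median first-passage time ≈ {np.nanmedian(times):.2f}")
```

A histogram of these crossing times estimates the first-passage probability density that the analytical method computes directly.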
Well, what exactly does the neuron do, then?
I will sketch in a bit more detail how the neuron operates. This description assumes that you are somewhat familiar with electrical circuit diagrams. I start with the famous Hodgkin-Huxley (HH) model from 1952, for which Hodgkin and Huxley received the Nobel Prize in 1963. There is plenty of material on it available elsewhere, so suffice it to say that it is a simple equivalent circuit (Fig. HH model) that was designed to describe the propagation of action potentials in axons. It models sodium (Na+) and potassium (K+) ion channels explicitly as variable resistors, and it predicts the shapes of spikes and their axonal propagation very well. However, it is insufficient for describing the generation of spikes in the neighbourhood of the soma, because the HH model is completely deterministic.
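For reference, the standard textbook form of the HH current balance (the classical equation, nothing specific to our paper) is

$$ C_m \frac{dV}{dt} = -\bar{g}_{\mathrm{Na}}\, m^3 h\, (V - E_{\mathrm{Na}}) - \bar{g}_{\mathrm{K}}\, n^4\, (V - E_{\mathrm{K}}) - g_L\, (V - E_L) + I_{\mathrm{ext}}, $$

where the gating variables m, h and n follow their own first-order kinetics. Every term is deterministic, which is precisely the limitation just mentioned.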
Fig. HH model. V is the internal (membrane) potential.
I prefer drawing the HH circuit in a somewhat different way, using (non-linear) current sources instead of variable resistors or conductances (Fig. HH model with current sources). This is particularly convenient when we are interested only in the neuron’s operation in the vicinity of the resting potential (i.e., in the potential range where the neuron spends its time when it isn’t spiking).
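The reinterpretation is purely algebraic: each conductance branch is simply read as a voltage-controlled current source,

$$ I_{\mathrm{ch}}(V) = g_{\mathrm{ch}}(V)\, (V - E_{\mathrm{ch}}), $$

so the circuit equations are unchanged and only the schematic symbols differ. Near the resting potential, this form makes it easier to see directly which branches inject current and which drain it.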
Fig. HH model with current sources. Here, I have included synaptic input as an additional current source.
The HH model assumes that K channels and Na channels are uniformly distributed. In reality, K channels are more prevalent in the proximal compartment, whereas Na channels are more prevalent in the AIS compartment. To transform the HH model into our three-compartment model, taking this heterogeneous distribution into account, we first stretch out the HH model as in Fig. Stretched-out. Note that this circuit is still electrically equivalent to the circuit in Fig. HH model with current sources.
Fig. Stretched-out HH model.
As the final step, we insert two axial resistors to create the three compartments (Fig. Three-compartment circuit). Although the change in structure is small (you can imagine evolution coming up with this idea!), the behaviour of the circuit changes dramatically.
Fig. Three-compartment circuit (CC BY 4.0).
The circuit operates approximately as follows. Input (Isyn) to the neuron is lowpass filtered by the distal compartment. After a spike, the proximal compartment potential Vp starts out at a relatively low value. It then rises slowly, driven by a current from the distal compartment through Rdp; the more input, the faster Cp charges. On top of this ramp, there is a small but stochastically varying current (IK) exported by the K channels. When the total potential in the proximal compartment reaches a certain threshold voltage, the Na channels in the AIS compartment sense it (akin to an emitter follower!) and a spike is generated. The threshold has an interesting history, so I have written a digression on this topic, the Story of the Spiking Threshold.
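To illustrate this sequence of events, here is a toy Euler simulation of the circuit. It is my own caricature of the diagram above, not the fitted model from the paper, and the component values are made up purely so that the dynamics are visible: the distal compartment lowpass filters Isyn, Cp charges through Rdp with a noisy IK added, and a spike plus reset occurs when Vp crosses the threshold.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up component values, chosen only so the dynamics are visible.
Cd, Cp = 1.0, 0.5         # distal / proximal capacitances
Rdp = 2.0                 # axial resistor between the compartments
tau_d = 1.0               # distal leak time constant
V_th, V_reset = 1.0, 0.0  # AIS threshold and post-spike reset of Vp
sigma_K = 0.2             # strength of the stochastic K-channel current

def simulate(I_syn=0.8, dt=1e-3, t_max=50.0):
    """Toy three-compartment run; returns the spike times."""
    Vd, Vp, spikes = 0.0, V_reset, []
    for k in range(int(t_max / dt)):
        t = k * dt
        I_axial = (Vd - Vp) / Rdp                  # current through Rdp
        # Distal compartment: lowpass filters the synaptic input.
        Vd += dt * ((I_syn - I_axial) / Cd - Vd / tau_d)
        # Proximal compartment: slow ramp plus noisy K-channel current.
        I_K = sigma_K * np.sqrt(dt) * rng.normal()
        Vp += dt * I_axial / Cp + I_K / Cp
        if Vp >= V_th:          # the AIS senses the threshold passage
            spikes.append(t)
            Vp = V_reset        # the next ramp starts from a low value
    return spikes

spikes = simulate()
isis = np.diff(spikes)
print(f"{len(spikes)} spikes, ISI CV ≈ {np.std(isis) / np.mean(isis):.2f}")
```

Note that with these placeholder values the deterministic ramp alone saturates just below the threshold, so it is the noise that triggers the spikes, in the spirit of the mechanism described above.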
The stochasticity turns out to be essential, but it also makes the mathematics complicated. Luckily, we found that the circuit is still mathematically tractable and that its behaviour agrees remarkably well with experimental data. For the nitty-gritty details, please see the paper in Phys Rev E.
Implications for current machine learning and AI techniques
Fig. Abstraction hierarchy of biological and biologically inspired learning.
Concretely, the results in Physical Review E show that the behaviour of neurons is considerably simpler than previously thought, which enables us to find compact but accurate mathematical abstractions of neurons. However, there is a lot more to do. The model can potentially help us find higher-level abstractions for aggregates of neurons (Fig. Abstraction hierarchy), which may lead to new, biologically inspired methods for knowledge representation, machine learning, and AI. A problem with today’s machine learning methods is that they require enormous amounts of data, whereas the human brain achieves impressive results with considerably less data by exploiting its structure instead.