Martin N P Nilsson
- Update 2023-10-09: Grand overhaul of this web page.
- Update 2023-09-05: The article Information Processing by Neuron Populations in the Central Nervous System: Mathematical Structure of Data and Operations is now on arXiv.
- Update 2023-05-21: The article Mechanistic explanation of neuronal plasticity and engrams using equivalent circuits is now on bioRxiv (updated 2023-10-01).
Welcome to my personal webpage, where I present my research more digestibly than in my research papers. You can visit my official profile at RISE for a more formal exposition. In addition, I have a page for avocational topics.
If you are searching for a different Martin Nilsson, I have compiled a list of namesake researchers. Should you wish to reach out, drop me an email at from.web@d…….m. Here, “d…….m” represents this webpage’s domain address.
Understanding the brain
I’m an associate professor (“docent” in Swedish) in mathematical physics at RISE. My research journey has taken me through various fields, including mathematics, biophysics, neuroscience, computer science, signal processing, mechatronics, and robotics. However, my core research since 2007 revolves around neurobiophysics. My motivations are threefold:
- To explore how biological mechanisms can inspire advanced machine learning and Artificial Intelligence techniques.
- To discover ways to assist those with nervous system injuries or impairments.
- And simply, the profound curiosity about the wonders of the mammalian brain.
The exploration of the brain encompasses multiple layers of abstraction, from the intricate biomolecular structures at the base to the complex cognitive processes at the summit (as illustrated in the Abstraction levels figure).
At the apex, the cognitive level governs conscious thinking, belongs largely to the realm of psychology, and operates mostly deterministically. In contrast, the foundational biomolecular level, characterized by elements such as ion channels, is strongly influenced by thermal noise and therefore behaves stochastically.
Fig. A rough division of Abstraction levels in the brain. Modeling the brain from a bottom-up perspective begins at the foundational level of individual neurons, gradually ascending through its complexities. Positioned between cognition and neurobiology is the conceptual space—an intermediate tier introduced by Gärdenfors. My journey has led me up to this intermediary phase (as indicated by the red arrow), yet much terrain remains to explore.
Current main results
I propose that the primary data structure in the central nervous system (CNS) can be characterized as a convex cone.
This is a robust and versatile representation. Neuron populations implement an algebra of operations on such data, including intersection, sum, projection, rejection, and negation. By combining these operations, populations efficiently and succinctly implement a variety of advanced signal processing operations.
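To make the cone-as-generators idea tangible, here is a minimal Python sketch of one common way to represent a convex cone and a couple of the operations mentioned above. The generator-column representation, the function names, and the nonnegative-least-squares membership test are my illustrative choices, not the constructions used in the paper.

```python
# Illustrative sketch only: a convex cone represented by its generator columns,
# plus a few cone operations. Names and the tolerance are hypothetical choices;
# see the paper for the actual definitions used there.
import numpy as np
from scipy.optimize import nnls

def in_cone(G, x, tol=1e-8):
    """Is x a nonnegative combination of the generator columns of G?"""
    _, residual = nnls(G, x)
    return residual < tol

def cone_sum(G1, G2):
    """Sum of two cones: the conic hull of the pooled generators."""
    return np.hstack([G1, G2])

def project_cone(G, B):
    """Project every generator onto the subspace spanned by the columns of B."""
    P = B @ np.linalg.pinv(B)          # orthogonal projector onto span(B)
    return P @ G

# Tiny usage example: two generators in R^3
G = np.column_stack([[1.0, 0.0, 1.0],
                     [0.0, 1.0, 1.0]])
print(in_cone(G, np.array([2.0, 3.0, 5.0])))   # True:  2*g1 + 3*g2
print(in_cone(G, np.array([1.0, 0.0, 0.0])))   # False: not a nonnegative combination
```

In this toy representation, the sum of two cones is simply the conic hull of the pooled generators, which is why `cone_sum` only concatenates columns.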
Fig. Data structures in the CNS. My main result is that the basic data structure in the CNS can be described mathematically as a convex cone. Neuron populations are perfectly designed to process data in this form.
To the best of my knowledge, this is currently the sole existing model of information processing by the CNS that is fully mechanistic all the way from ion channels to neuron populations.
For details, please see my paper Information processing by neuron populations in the central nervous system: mathematical structure of data and operations, and below.
Excellent discussions of hierarchical views of the brain can be found in Churchland et al.: What is computational neuroscience? (chapter 5, pp. 46-55) in Schwartz (ed.): Computational Neuroscience, and in Ballard: Brain Computation as Hierarchical Abstraction.
Only mechanistic models explain
My quest revolves around seeking explanations for the workings of the nervous system and the brain. Central to this research are mechanistic models. While empirical or phenomenological models can be easily fitted to measurement data, they fall short in providing insights into the underlying mechanisms. Crafting mechanistic models, however, poses significant challenges because the system under study must be understood and cannot be a black box. The philosopher Carl F. Craver has discussed the importance of mechanistic models in the article When Mechanistic Models Explain and the book Explaining the brain, which I strongly recommend.
To illustrate this, let me share an experience starting in 2007. My curiosity about neuroscience was piqued through a collaboration with the distinguished neurophysiologist Henrik Jörntell at Lund University. Armed with long sequences of spike trains that Henrik had recorded from neurons, I set out to discern inherent patterns. Analyzing histograms of inter-spike intervals (ISIs) and experimenting with a mathematical program, I stumbled upon a simple model that reproduced the histograms with uncanny accuracy. Yet this was a purely empirical model: while it matched the data well, I was left pondering the reasons behind such a good fit (as shown in the figure First empirical model).
Fig. First empirical model. The first neuron model (GESS) was entirely empirical but became quite accurate despite having only three parameters. This accuracy was perplexing! There had to be a reason for the good match. (The gamma model was an established empirical model for neuronal firing.)
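To give a flavor of that first exercise, here is a small Python sketch of the analysis step described above: extract inter-spike intervals from a spike train, plot their histogram, and fit the classical gamma ISI model mentioned in the caption. The spike train below is synthetic and all parameters are invented; the GESS model itself is not reproduced here.

```python
# Sketch of the ISI-histogram exercise described above, on synthetic data.
# The gamma fit is the classical empirical ISI model, not the GESS model.
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Synthetic spike times (seconds): gamma-distributed intervals, for illustration
isi_true = rng.gamma(shape=3.0, scale=0.01, size=5000)
spike_times = np.cumsum(isi_true)

# Inter-spike intervals recovered from the spike times
isi = np.diff(spike_times)

# Fit a gamma distribution (location pinned at 0)
shape, loc, scale = stats.gamma.fit(isi, floc=0.0)

# Compare the histogram with the fitted density
t = np.linspace(0.0, isi.max(), 400)
plt.hist(isi, bins=80, density=True, alpha=0.5, label="ISI histogram")
plt.plot(t, stats.gamma.pdf(t, shape, loc=loc, scale=scale), label="gamma fit")
plt.xlabel("inter-spike interval (s)")
plt.ylabel("density")
plt.legend()
plt.show()
```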
I felt there had to be a concrete reason for the GESS model’s accuracy, prompting me to dig deeper. This task was challenging. I focused on understanding neurons and their ion channels, the protein molecules in the cell membrane that control the exchange of ions between the neuron’s interior and exterior.
After extensive research, I realized the behavior of these ion channels could be represented by a model where the neuron acts as a queueing system with a variable number of servers over time. This solution incorporated Markov processes and drew upon the characteristics of Charlier polynomials.
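As a rough illustration of this queueing picture (and only of its flavor, not the model in the papers below), the following Python sketch uses a Gillespie algorithm to simulate a population of two-state ion channels. The number of open channels then evolves as a birth-death Markov process, the same kind of object that appears in queueing models. The channel count and rates are invented.

```python
# Toy illustration of the "neuron as a queueing system" picture:
# N independent two-state ion channels; the number of open channels is a
# birth-death Markov process. N and the rates are invented, not from the paper.
import numpy as np

def simulate_open_channels(N=100, alpha=2.0, beta=1.0, t_end=10.0, seed=0):
    """Gillespie simulation of the open-channel count k(t)."""
    rng = np.random.default_rng(seed)
    t, k = 0.0, 0
    times, counts = [t], [k]
    while t < t_end:
        rate_open = (N - k) * alpha      # closed -> open ("arrivals")
        rate_close = k * beta            # open -> closed ("departures")
        total = rate_open + rate_close
        t += rng.exponential(1.0 / total)
        k += 1 if rng.random() < rate_open / total else -1
        times.append(t)
        counts.append(k)
    return np.array(times), np.array(counts)

times, counts = simulate_open_channels()
# In equilibrium the open count fluctuates around N*alpha/(alpha+beta);
# these fluctuations are the channel noise that shapes spike timing.
print(counts[len(counts) // 2:].mean())  # roughly 100*2/3 ≈ 67
```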
After partitioning the neuron model into three compartments, I could eventually develop a complete mechanistic model using only well-known properties of ion channels. This model explained the observed data, and its fit with the measurements was even more precise, as showcased in the Final mechanistic model figure.
Fig. Final mechanistic model. It took 14 years until the final mechanistic model was complete and published. (The kernel density estimator is a refinement of the histogram that retains more information.)
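For completeness, here is a tiny sketch of the kernel density estimate mentioned in the caption, compared with a histogram on the same synthetic sample; the bandwidth is scipy's default, not the estimator used in the paper.

```python
# Sketch: kernel density estimate vs. histogram on a synthetic ISI sample.
# Purely illustrative; scipy's default bandwidth, invented parameters.
import numpy as np
from scipy.stats import gaussian_kde
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
isi = rng.gamma(shape=3.0, scale=0.01, size=2000)   # synthetic ISIs

t = np.linspace(0.0, isi.max(), 400)
kde = gaussian_kde(isi)                              # smooth density estimate

plt.hist(isi, bins=40, density=True, alpha=0.5, label="histogram")
plt.plot(t, kde(t), label="kernel density estimate")
plt.xlabel("inter-spike interval (s)")
plt.legend()
plt.show()
```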
The work was eventually published in 2021, taking altogether 14 years, but the time invested resulted in a robust and dependable framework. Significantly, its reliability does not hinge on any speculative assumptions or properties.
A more detailed, popular description of how neurons generate spike trains can be found here.
A bottom-up approach to stay mechanistic
Prioritizing mechanistic models encourages a bottom-up methodology. The rationale is straightforward: understanding the foundational building blocks is vital before constructing more complex systems. However, there is always the lingering question of how far down we need to go before starting the reconstruction.
Indeed, beneath the realm of ion channels exists an even more granular layer detailing gating schemes for molecular states. Yet, as we climb the abstraction ladder, statistical laws suggest that the intricacies of these schemes become less crucial.
Consequently, my endeavors have centered on the transition from the ion-channel level to that of neuronal populations, establishing a robust, ground-up mechanistic foundation for what the cognitive scientist and philosopher Peter Gärdenfors terms conceptual spaces (see also Wikipedia: conceptual space).
Here are my key publications, progressing from the level of ion channels to neuron populations:
Nilsson, M.: On the transition of Charlier polynomials to the Hermite function (arXiv, 2012; Constructive Approximation, 2021). This paper cracks a hard mathematical nut that appears when applying queueing theory to the neuron. It was a prerequisite for the following work and took four years to complete.
Nilsson, M.: Hitting time in Erlang loss systems with moving boundaries (Queueing Systems: Theory and Applications, 2014). Queueing systems with a variable number of servers pose a notoriously tricky first-passage problem. This paper showed that the problem is solvable explicitly for an asymptotically large number of ion channels.
Nilsson, M.: The moving-eigenvalue method - hitting time for Itô processes and moving boundaries (Journal of Physics A: Mathematical and Theoretical, 2020). This paper generalizes the previous article to the continuous domain, a necessity for application to neurons. The first version used the central limit theorem for functions (a.k.a. Donsker’s theorem); it was only five pages long but, unfortunately, not very accessible. Reviewers rightly asked for a major revision. I rewrote it completely, using Sturm-Liouville theory instead of queueing theory, with the added benefit that the method became generally applicable to Itô processes and arbitrary boundaries. Having already solved the problem with queueing theory was decisive for this rewrite.
Nilsson, M. and Jörntell, H.: Channel current fluctuations conclusively explain neuronal encoding of internal potential into spike trains (Physical Review E, 2021; a popular introduction to this paper). This paper explains mechanistically how a neuron encodes membrane potential as a spike train and provides massive experimental evidence. We found it necessary and sufficient in this model to divide the neuron into three compartments. The idea of modeling spike generation as the first passage of a moving boundary was suggested by Gluss in 1967, but this mathematical problem had not been efficiently solved until my paper on the moving-eigenvalue method in 2020. (A toy simulation of this first-passage setup is sketched after this publication list.)
Nilsson, M.: Mechanistic explanation of neuronal plasticity and engrams using equivalent circuits (bioRxiv, 2023). Whereas the previous paper explained the output half of a neuron, i.e., how the neuron converts membrane potential to spike trains, this paper explains the input half, or how the neuron converts spike trains to membrane potential, and most notably, neuronal plasticity, or how the neuron modifies its synapses depending on input. Crucial here is the inclusion of the synaptic cleft in the model and the arrangement of internal feedback.
Nilsson, M.: Information processing by neuron populations in the central nervous system: mathematical structure of data and operations (arXiv, 2023). This paper focuses on the meaning-carrying invariant of the communication between neuron populations. I show that such an invariant can be characterized as a convex cone. Neuron populations implement an efficient algebra of such structures, which has potential implications for machine learning and AI; one example is the greater versatility of matrix embeddings compared with vector embeddings. This description level is entirely above the spiking level and matches Gärdenfors’ conceptual spaces.
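As promised above, here is a toy Monte Carlo sketch of the first-passage setup behind the Physical Review E paper: a noisy, drifting process stands in for the membrane potential and is run until it crosses a moving threshold, with the crossing time playing the role of a spike time. The drifted Brownian motion, the linear boundary, and all parameters are invented stand-ins; this is neither the three-compartment model nor the moving-eigenvalue method.

```python
# Toy Monte Carlo of "spike generation as first passage of a moving boundary".
# The process and boundary are invented stand-ins (drifted Brownian motion,
# linearly moving threshold), not the model from the papers above.
import numpy as np

def first_passage_times(n_trials=1000, dt=1e-3, t_max=2.0,
                        mu=5.0, sigma=1.0, b0=1.0, slope=-0.5, seed=0):
    """First-passage times of X(t) = mu*t + sigma*W(t) through the moving
    boundary b(t) = b0 + slope*t (np.nan if no crossing before t_max)."""
    rng = np.random.default_rng(seed)
    n_steps = int(t_max / dt)
    t = np.arange(1, n_steps + 1) * dt
    boundary = b0 + slope * t
    fpt = np.full(n_trials, np.nan)
    for i in range(n_trials):
        dx = mu * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_steps)
        x = np.cumsum(dx)                       # Euler path of the process
        crossed = np.nonzero(x >= boundary)[0]
        if crossed.size:
            fpt[i] = t[crossed[0]]
    return fpt

fpt = first_passage_times()
# The histogram of these crossing times is the toy analogue of an ISI histogram.
print("mean first-passage time:", np.nanmean(fpt))
```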
Neuroscience is extremely interdisciplinary
The role of biology in neuroscience is self-evident. Although I was aware that neuroscience demands an interdisciplinary approach, the extent to which mathematics, signal processing, and electronics are required was a surprise. Below is a table outlining the subjects addressed in the papers referenced earlier:
| Paper | Biology | Mathematics | Electronics | Signal processing |
|---|---|---|---|---|
| On the transition… | | Classical analysis, approximation theory, orthogonal polynomials | | |
| Hitting time… | | Queueing theory, probability theory | | |
| The moving-eigenvalue method… | | Stochastic DE, partial DE, Sturm-Liouville theory | | |
| Channel current fluctuations… | Electrophysiology, neurobiology, ion channels | Coupled stochastic DE, ordinary DE | Circuit theory, transistors, filters, noise | |
| Mechanistic explanation… | Neurobiology, ion channels, plasticity | | Circuit theory, transistors, filters | Adaptive filter theory |
| Information processing… | Neurophysiology, population structure | Functional analysis, wavelet transforms, geometry, invariants | | Adaptive filter theory |
Table. Subjects dealt with in the papers. Each paper depends on the previous one, in a sequence of increasing levels of abstraction. For future, higher levels, computer science will most likely become an essential discipline. DE = differential equations.
Speculation: What if the brain is similar to a computer?
Understanding the brain remains a complex endeavor, with many layers yet to be fully unraveled. For instance, beyond the realm of conceptual spaces, the connectivity among neuron populations takes on significant importance. At this juncture, the brain begins to show parallels with a computer.
Drawing on this analogy, if the brain mirrors a computer, it’s not too radical to think that the brain might possess something akin to software. You can delve deeper into this comparison between the brain and a computer in this speculative piece where I discuss such a hypothesis.