
New model of neural processing could help us understand the brain and create better AI

November 22, 2019

Many people are familiar with “brain waves” — rhythmic oscillations in neural activity across the brain that, for instance, can be observed during a sleep study. But little is known about how these rhythmic patterns across networks of neurons relate to the electrical activity of individual neurons, and how they might work in concert to enable the brain to perform complex computations. 

[Photo: E. Paxon Frady]

In a new paper published in the Proceedings of the National Academy of Sciences, postdoctoral fellow E. Paxon Frady and Adjunct Professor of Neuroscience Friedrich Sommer describe their new computational model of neural processing that takes into account both spikes and rhythms. Their model could be used to gain insight into brain function and also to improve artificial intelligence. Both scientists currently serve as researchers-in-residence in Intel’s Neuromorphic Computing Lab to explore how their oscillatory computing model might advance state-of-the-art neuromorphic silicon chips. 

[Photo: Fritz Sommer]

In this Q&A, Frady and Sommer, both from the Redwood Center for Theoretical Neuroscience and the Helen Wills Neuroscience Institute, help us understand their work and its implications. 

What motivated you to do this study?

Recording electrical signals from behaving brains is a key technique in neuroscience. Micro-electrodes placed within the brain reveal electrical pulses, or “spikes,” generated by individual neurons. The rate at which a cell produces spikes often correlates with specific behaviors or stimuli. Over the last 60 years, experiments of this type have enabled the functional characterization of neurons in a multitude of brain regions. This has led to the theory of rate coding, in which the spiking rate of a neuron is the primary internal representation of information in the brain.

The same recordings also reveal rhythmic signals, which range in frequency from a few to a few hundred cycles per second. Such signals also correlate with behavior and are often synchronized with spiking activity. Although these “brain waves” were discovered more than a hundred years ago, their interpretation is still debated, and they have not yet been incorporated into a theory of neural computation.

The lack of brain theories that include spike timing and rhythms also manifests in machine learning — the most successful artificial neural networks to date compute with continuous signals analogous to rates, and not directly with spikes.

Our study proposes a theory and computational models showing how neurons can compute robustly and efficiently using spikes and rhythms. The theory shows how information can be represented with precise spike timing, and establishes a link between spikes, oscillations, and their role in information processing.   

What did you do in your study and what were your general conclusions?
 
We propose a theory of computation with spiking neurons in which brain waves are essential. According to the theory, a spike encodes analog information in its exact timing relative to spikes of other neurons. The network rhythm provides a reference clock, which enables computations encoded by spike times to work robustly, even in recurrent circuits and under noisy conditions.
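
To make the encoding concrete, here is a minimal Python sketch of phase coding, assuming an analog value in [0, 1) is represented by one spike per cycle of a 10 Hz reference rhythm; the function names and parameter values are illustrative, not taken from the paper.

```python
import numpy as np

CYCLE = 0.1  # assumed period of the reference rhythm, in seconds (10 Hz)

def encode(value, n_cycles=3):
    """Encode an analog value in [0, 1) as one spike time per cycle."""
    return np.array([(k + value) * CYCLE for k in range(n_cycles)])

def decode(spike_times):
    """Recover the analog value from spike phases relative to the rhythm."""
    phases = (spike_times % CYCLE) / CYCLE  # phase of each spike, in [0, 1)
    # Average on the unit circle so phases near a cycle boundary wrap correctly.
    mean_angle = np.angle(np.mean(np.exp(2j * np.pi * phases)))
    return (mean_angle / (2 * np.pi)) % 1.0

rng = np.random.default_rng(0)
spikes = encode(0.37) + rng.normal(0.0, 0.002, size=3)  # add 2 ms timing jitter
print(round(decode(spikes), 3))  # ~0.37 despite the jitter
```

Decoding averages the spike phases on the unit circle, which is what makes the readout tolerant of timing jitter, in the spirit of the rhythm serving as a reference clock.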

How to robustly compute with spiking neural networks has been a long-standing challenge. Our theory solves this problem by establishing a relationship between spiking neurons and artificial neurons whose states are complex numbers. Such complex-valued networks have been proposed before for pattern completion and error correction in optical computers, but their relevance to neuroscience was unclear. 

In essence, our theory translates a complex-valued artificial neural network performing a desired computation into a spiking neural network performing the same computation. Specifically, the phase of a complex neural state is translated into a spike time, and a complex-valued synaptic weight is translated into a synapse between the spiking neurons with specific delay and strength.
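
A minimal sketch of this translation, assuming a 100 ms rhythm period (the names here are illustrative): the magnitude of a complex weight becomes the synaptic strength, and its phase angle becomes a conduction delay expressed as a fraction of the oscillation cycle.

```python
import numpy as np

CYCLE = 0.1  # assumed period of the network rhythm, in seconds

def complex_weight_to_synapse(w):
    """Translate a complex weight into a spiking synapse (strength, delay)."""
    strength = np.abs(w)  # magnitude -> synaptic strength
    # Phase angle of w, mapped onto a fraction of one oscillation cycle.
    delay = (np.angle(w) % (2 * np.pi)) / (2 * np.pi) * CYCLE
    return strength, delay

w = 0.8 * np.exp(1j * np.pi / 4)  # example complex weight
strength, delay = complex_weight_to_synapse(w)
print(f"strength = {strength:.2f}, delay = {delay * 1000:.1f} ms")
# strength = 0.80, delay = 12.5 ms (pi/4 is one eighth of a cycle)
```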

What do you think is the most interesting or surprising outcome of the study?

In a seminal paper, John Hopfield proposed an associative network for memory-based pattern completion, which could be a building block in the brain for storing and accessing relevant learned information, as well as for general error correction. We demonstrate a significant advantage of associative networks with complex-valued neurons and synapses over their real-valued counterparts. While the Hopfield network stores only binary patterns, the new model can store analog-valued patterns and achieves higher information capacity per synapse. The spiking neural network inherits these advantages from the complex-valued associative network.
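
As a rough illustration of such a network, the sketch below implements a phasor associative memory with Hebbian (outer-product) storage of unit-magnitude complex patterns and a phase-only update; this is a simplified stand-in under stated assumptions, not the exact model from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 256, 10                                      # neurons, stored patterns
patterns = np.exp(2j * np.pi * rng.random((P, N)))  # random unit-magnitude phasors

# Hebbian storage: sum of complex outer products, with no self-coupling.
W = (patterns.T @ patterns.conj()) / N
np.fill_diagonal(W, 0)

def recall(z, steps=20):
    """Phase-only update: each neuron keeps the angle of its input, unit magnitude."""
    for _ in range(steps):
        z = np.exp(1j * np.angle(W @ z))
    return z

# Cue the network with a stored pattern corrupted by phase noise.
cue = patterns[0] * np.exp(1j * 0.8 * rng.standard_normal(N))
out = recall(cue)
overlap = np.abs(np.vdot(patterns[0], out)) / N  # 1.0 would be perfect recall
print(f"overlap with the stored pattern: {overlap:.3f}")
```

Because each stored entry is a phase rather than a single bit, the same set of synapses holds analog-valued patterns, consistent with the capacity advantage described above.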

The resonate-and-fire neuron is one of the simplest neuron models that endows single neurons with oscillatory properties. Our study shows how networks of resonate-and-fire neurons, where synapses are complex or have delays, can be used for robust computation according to our theory.  
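
For concreteness, here is a brief sketch of a single resonate-and-fire neuron in the style of Izhikevich’s original formulation: its state is one complex number evolving as a damped oscillator, and it spikes when the imaginary part crosses a threshold. The parameter values are illustrative.

```python
import numpy as np

def resonate_and_fire(inputs, dt=1e-3, b=-2.0, omega=2 * np.pi * 10.0, thresh=1.0):
    """One neuron; `inputs` holds the input current at each time step."""
    z = 0.0 + 0.0j  # complex state: a damped oscillator near 10 Hz
    spikes = []
    for t, current in enumerate(inputs):
        z += dt * ((b + 1j * omega) * z + current)
        if z.imag > thresh:  # spike when the imaginary part crosses threshold
            spikes.append(t * dt)
            z = 0.0 + 0.0j   # simple reset
    return spikes

# Pulses arriving at the neuron's resonant period (100 ms) add up in phase and
# trigger spikes; the same pulses at an off-resonance interval would not.
I = np.zeros(1000)
I[::100] = 400.0
print(resonate_and_fire(I))
```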

Further, our study shows how circuits of integrate-and-fire neurons, the simplest model of a spiking neuron (which has no intrinsic oscillatory behavior), can also support robust computation according to our theory in a biologically plausible network. The critical ingredient is a set of feedback inhibitory neurons, which provide postsynaptic inhibition that is balanced with excitation in strength but offset in time, as in the sketch below. The network model shows how rhythms, conduction delays, and feedback inhibition contribute to efficient and robust computation with spikes.
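
The sketch below illustrates this motif in a deliberately reduced form: a single leaky integrate-and-fire neuron receives an excitatory drive together with an equal-strength copy of that drive as inhibition, offset by a fixed delay, standing in for the feedback inhibitory population. Only input arriving within the delay window can push the neuron to threshold. All names and parameters are assumptions for illustration.

```python
import numpy as np

def lif_with_delayed_inhibition(exc, delay_steps=20, dt=1e-3, tau=0.02, thresh=1.0):
    """LIF neuron driven by `exc` plus an equal but delayed inhibitory copy."""
    v = 0.0
    spikes = []
    for t in range(len(exc)):
        inh = exc[t - delay_steps] if t >= delay_steps else 0.0
        v += dt * (-v / tau + exc[t] - inh)  # balanced in strength, offset in time
        if v > thresh:
            spikes.append(t * dt)
            v = 0.0
    return spikes

# A sustained drive can push the neuron over threshold only during the first
# 20 ms, before the matching inhibition arrives and cancels it.
exc = np.zeros(500)
exc[100:] = 100.0
print(lif_with_delayed_inhibition(exc))  # spikes only within the delay window
```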

What implications does this have for computing and artificial intelligence?

Many of the recent advances in artificial intelligence have come from neural networks inspired by the brain.

Our theory provides a new link between brains and complex-valued artificial neural networks. Some research has already shown advantages of complex-valued representations in machine learning. The new link will enable the design of better artificial intelligence systems by drawing direct inspiration from real brains.

Further, the theory can guide the programming of recently developed neuromorphic computer chips, such as Intel’s Loihi. By leveraging the unprecedentedly low energy consumption of these chips, this model of neural computation could provide energy-efficient alternatives to standard neural networks run on CPUs or GPUs.

How will this research advance our knowledge of how the brain works?

The next frontier in neuroscience is understanding how circuits in the brain perform the computations that constitute a brain function. To address this challenge, recording techniques have recently been engineered that can monitor thousands of neurons or more simultaneously. But to leverage such experimental data, a theory is needed that makes testable predictions about how spiking neurons can produce powerful and robust computation.

This research enables building models with spiking neurons that perform the computations postulated to underlie a specific behavior. Such models make testable predictions that can be compared with data from recording experiments. This repeated interaction between theory and experiment can elucidate the fundamental question of how circuits in the brain process information.

By Rachel Henderson
