3. Working with Norse

For us, Norse is a tool to accelerate our own work with spiking neural networks (SNNs). This page describes the fundamental ideas behind the Python code in Norse and provides you with specific tools to become productive with SNNs.

We will start by explaining some basic terminology, suggest how Norse can be approached, and finally provide examples of how we have solved specific problems with Norse.

Table of contents

  1. Terminology

  2. Norse workflow

  3. Solving deep learning problems with Norse


3.1. Terminology

3.1.1. Events and action potentials


Fig. 3.1 Illustration of discrete events, or spikes, from 10 neurons (y-axis) over 40 timesteps (x-axis) with events shown in white.

Neurons are famous for their efficiency because they only react to sparse (rare) events called spikes or action potentials. In a spiking network, less than \(2\%\) of the neurons are active at once. In Norse, therefore, we mainly operate on binary tensors of 0’s (no event) and 1’s (spike!). Fig. 3.1 illustrates such randomly sampled data with exactly \(2\%\) activation.
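
To make this concrete, here is a small sketch (our illustration, not the code behind Fig. 3.1) that samples a binary spike tensor with roughly \(2\%\) activation:

import torch

timesteps, neurons = 40, 10
# Each entry spikes independently with probability 0.02
spikes = (torch.rand(timesteps, neurons) < 0.02).float()
print(spikes.mean())  # fraction of active entries, close to 0.02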

3.1.2. Neurons and neuron state

Neurons have parameters that determine their function. For example, they have a certain membrane voltage that will lead the neuron to spike if the voltage is above a threshold. Something needs to keep track of that membrane voltage; if we did not, the neuron membrane would never update and we would never get any spikes. In Norse, we refer to this as the neuron state.

In code, it looks like this:

import torch
import norse.torch as norse

cell = norse.LIFCell()
data = torch.ones(1)
spikes, state = cell(data)        # First run is done without any state
# ...
spikes, state = cell(data, state) # Now we pass in the previous state

Fig. 3.2 Shape of a typical action potential. The membrane potential remains near a baseline level until, at some point in time, it abruptly spikes upward and then rapidly falls.

(c) CC BY-SA 3.0

States typically consist of two values: v (voltage) and i (current).

  • Voltage (v) describes the difference in electric tension across the neuron membrane. The higher the value, the more tension and the better the chance of a spike. In Fig. 3.2 the spike occurs at the peak of the curve, followed by an immediate reset and recovery. This is crucial for emitting spikes: if the voltage never increases, there will be no spike!

  • Current (i) describes the incoming current, which is integrated into the membrane potential v and decays over time.
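
Building on the LIFCell example above, you can inspect these values directly on the returned state. This is a minimal sketch; it assumes the feed-forward LIF state exposes the fields v and i described above:

import torch
import norse.torch as norse

cell = norse.LIFCell()
spikes, state = cell(torch.ones(1))
print(state.v)  # membrane voltage after one step
print(state.i)  # input current after one step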

3.1.3. Neuron dynamics and time

Norse solves two of the hardest parts of running neuron simulations: neural equations and temporal dynamics. We provide a long list of neuron model implementations, listed in our documentation, that you can plug and play.

For each model, we distinguish between time and recurrence as follows (using the Long short-term memory neuron model as an example):

                      Without time         With time
  Without recurrence  LSNNCell             LSNN
  With recurrence     LSNNRecurrentCell    LSNNRecurrent

In other words, the LSNNCell is *not* recurrent and expects the input data to *not* have a time dimension, while the LSNNRecurrent is recurrent and expects the input to have time in the first dimension.
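
The following sketch illustrates the difference in expected input shapes. It assumes that, like LIFCell above, the feed-forward variants need no explicit sizes:

import torch
import norse.torch as norse

features = 4

# LSNNCell processes a single timestep: input shape (batch, features)
cell = norse.LSNNCell()
out, state = cell(torch.ones(1, features))

# LSNN processes a whole sequence: time in the first dimension
lifted = norse.LSNN()
out_seq, seq_state = lifted(torch.ones(40, 1, features))

print(out.shape)      # torch.Size([1, 4])
print(out_seq.shape)  # torch.Size([40, 1, 4])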

3.2. Norse workflow

Norse is meant to be used as a library. Specifically, that means taking parts of it and remixing them to fit the needs of a specific task. We have tried to provide useful, documented, and correct features from the spiking neural network domain, so that they are simple to work with.

The two main differences from artificial neural networks are 1) the state variables that track the neuron dynamics and 2) the temporal dimension (see Introduction to spiking systems). Apart from that, Norse works as you would expect any PyTorch module to work.

When working with Norse, we recommend that you consider two things:

  1. Neuron models

  2. Learning algorithms and/or plasticity models

3.2.1. Deciding on neuron models

The choice of neuron model depends on the task. The leaky integrate-and-fire neuron model is one of the most common. Many more neuron models exist and can be found in our documentation: https://norse.github.io/norse/generated/norse.torch.html#neuron-models
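
Because the models share the same calling convention, swapping one model for another is typically a one-line change. A sketch (LIF and LIFAdEx are two of the models listed in the documentation above):

import torch
import norse.torch as norse

data = torch.ones(40, 1, 2)  # (time, batch, features)

# A leaky integrate-and-fire population over time ...
lif = norse.LIF()
spikes_lif, _ = lif(data)

# ... can be swapped for, e.g., an adaptive exponential variant
lif_adex = norse.LIFAdEx()
spikes_adex, _ = lif_adex(data)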

3.2.2. Deciding on learning/plasticity models

Optimisation is mainly done using PyTorch’s optimizers, as seen in the MNIST task. We have implemented SuperSpike and many other surrogate gradient methods that let you seamlessly train spiking networks with Norse.

For more details, see our documentation on activation functions.
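
As a minimal sketch of how these pieces fit together (assuming norse.torch.SequentialState for threading neuron state through a stack of layers), a spiking model can be trained with a standard PyTorch optimizer, with the surrogate gradients letting the loss backpropagate through the spikes:

import torch
import norse.torch as norse

# A toy model: a linear layer followed by a LIF population over time
model = norse.SequentialState(
    torch.nn.Linear(2, 1),
    norse.LIF(),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

data = torch.ones(40, 1, 2)    # (time, batch, features)
target = torch.ones(40, 1, 1)  # toy target spike train

out, _ = model(data)
loss = torch.nn.functional.mse_loss(out, target)
optimizer.zero_grad()
loss.backward()   # surrogate gradients flow through the spiking nonlinearity
optimizer.step()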

3.3. Solving deep learning problems with Norse

As you have seen, Norse can be applied directly to both fundamental research and deep learning problems. Below, we show how two such problems have been solved.

3.3.1. Porting existing deep learning problems

A classical example of deep learning can be seen in the MNIST task, where we convert the MNIST dataset into sparse discrete events and solve the task with >90% accuracy using convolutions.
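
The general recipe is to keep the familiar PyTorch layers and replace the activation functions with spiking neurons. Below is a rough sketch of that idea, not the actual MNIST task code; it assumes norse.torch.Lift for applying non-spiking layers at every timestep and a non-spiking leaky integrator (LI) readout:

import torch
import norse.torch as norse

model = norse.SequentialState(
    norse.Lift(torch.nn.Conv2d(1, 8, 3)),  # convolution applied at each timestep
    norse.LIF(),                           # spiking activation instead of ReLU
    norse.Lift(torch.nn.Flatten()),
    norse.Lift(torch.nn.Linear(8 * 26 * 26, 10)),
    norse.LI(),                            # leaky integrator readout
)

data = torch.ones(20, 1, 1, 28, 28)  # (time, batch, channel, height, width)
out, _ = model(data)
print(out.shape)  # torch.Size([20, 1, 10])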

3.3.2. Extending existing models

An example of this can be seen in the memory task, where adaptive long short-term spiking neural networks are added to solve temporal memory problems.